Last week the nonprofit Future of Life Institute — a group that advocates for the responsible and ethical development and use of artificial intelligence (AI) — released an open letter calling for a pause on giant AI experiments.
Specifically, the missive asked AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, the newest version of the large language model behind ChatGPT. ChatGPT is the fastest-growing consumer application of all time; it reached 100 million monthly users in January, just two months after its release.
Signatories to the letter, which calls for AI research and development to be safer, more robust and more loyal, included Evan Sharp, co-founder of Pinterest; Craig Peters, CEO of Getty Images; Steve Wozniak, co-founder of Apple; Valerie Pisano, president and CEO of MILA; and Jaan Tallinn, co-founder of Skype.
Here in Rochester, Dr. Walter Johnson, CEO of FLX AI, Inc., whose multi-disciplinary and experienced team offers scalable solutions that deploy artificial intelligence, was aware of the letter but unfazed.
“I got into the field of AI in the mid-eighties, and it’s been a field of hype ever since,” said Johnson, whose vast experience includes serving as executive director at the University of Rochester’s Rochester Data Science Consortium and vice president of the Palo Alto Research Center (a Xerox company). “Generative AI is not going to change the world overnight. We’re at the very beginning.”
Generative AI – the type of deep learning AI used in newly released platforms like ChatGPT, Google Bard, and Bing Chat – differs from traditional AI. Whereas traditional AI mostly recognizes patterns and can provide canned answers to set questions and keywords, generative AI creates brand-new content, including text, images, audio, and more, that appears human-like in origin.
Craig Lamb, co-owner and partner at Envative, a Rochester-based custom software technology consulting firm, is not yet seeing much commercial use of generative AI. He is, however, seeing a rise in the use of, and continual advancement in, machine learning – a subset of traditional AI – in applications like predictive analytics, collecting and finding trends in healthcare data, predictive text, and product recommendations.
Machine learning also frees employees from repetitive tasks so they can be redeployed to more meaningful work, Lamb noted.
“Machine learning doesn’t have the ability to actually learn,” Lamb said. “It recognizes patterns and trends and makes suggestions. It’s great for fraud detection and shopping recommendations. Real AI would have the ability to say, ‘This trend I’m seeing doesn’t look right,’ and will correct itself.”
In theory, generative AI could write this article (it did not!), but it would be based on information already found on the internet. It would not contain this quote from Dr. Johnson, because that quote came from an interview conducted just for this article: “Generative AI doesn’t care, it doesn’t hope, it doesn’t have taste buds, it doesn’t have children,” Johnson said. “It’s just a big statistical machine.”
Do businesses not involved in the development of generative AI need to be concerned or excited about it right now? Johnson and others interviewed for this piece believe so, though the extent will depend on each industry’s needs and wants.
“It’s kind of a novelty right now,” said Nick Cozzolino, information security director at the Bonadio Group, a top 50 CPA firm whose corporate headquarters is in Pittsford. “It’s a cool technology with a ton of potential.”
Cozzolino sees potential value in generative AI for mathematicians, accountants, coders, researchers, and others in industries that depend on vast amounts of data. He also believes it will help close skill and knowledge gaps in some fields.
However, he warns that generative AI is “not a system of truth” and that “it’s important to understand the risks that come along with it.” The data a generative AI engine culls could be manipulated, inaccurate, or biased, depending on how the system was trained. Using generative AI at this point could also pose security, privacy, and data risks for your company.
“You shouldn’t put private or personal information into the query,” said Cozzolino of anyone trying out ChatGPT or a similar platform. “You shouldn’t put in any information your company considers confidential or is protected or privileged. Anything you put into a query should be sanitized.”
Attorney Anna Mercado Clark, leader of the Data Security & Privacy practice team at Phillips Lytle LLP, also cautions against using publicly available generative AI platforms for much aside from fun and curiosity at this point.
If you do use them personally or professionally, she recommends reading the licensing agreement closely, as any information you enter into the platform’s query may not remain exclusively yours once you submit it.
“Be mindful of what information you’re putting out there and who gets to use that data,” said Clark, who also warns about the veracity of the information you get back.
Academia is among the industries already seeing issues with generative AI engines; Clark says the technology “adds a new layer” of concern for professors trying to weed out plagiarism and distinguish original works from AI-generated ones.
For example, last week a college student in England used ChatGPT to write a letter that helped her get out of a parking ticket. Clark also mentioned the art world, where there are ethical and legal concerns, like intellectual property protection.
In January 2023, three artists whose works were used as AI training images without their consent filed a class-action lawsuit in California against Stability AI, DeviantArt, and Midjourney. From the suit: “The plaintiffs and the Class seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work.”
Caurie Putnam is a Rochester-area freelance writer.