CB News
Generative artificial intelligence has become a buzzword this year, capturing the public’s imagination and setting off a race between Microsoft and Alphabet to launch products built on technology they believe will change the nature of work.
Here’s everything you need to know about this technology.
What is generative AI?
Like other forms of artificial intelligence, generative AI learns to take actions based on past data. It creates entirely new content (a text, an image, even computer code) based on this training, rather than simply categorizing or identifying data like other AIs.
The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The artificial intelligence that powers it is known as a large language model because it takes in a text prompt and writes a human-like response to it.
GPT-4, a newer model that OpenAI announced this week, is “multimodal” in that it can perceive not only text but also images. OpenAI’s president demonstrated on Tuesday how you could take a photo of a hand-drawn mockup for a website you wanted to build and generate a working website from it.
What is it for?
Demonstrations aside, companies are already putting generative AI to work.
The technology is useful for creating a first draft of marketing copy, for instance, though it may require cleanup because it is not perfect. One example is CarMax Inc, which has used a version of OpenAI’s technology to summarize thousands of customer reviews and help buyers decide which used car to purchase.
Generative AI can also take notes during a virtual meeting, draft and customize emails, and create slideshows. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.
What’s the problem?
Nothing, although there are concerns about the technology’s potential abuse.
School systems have grown concerned that students will hand in essays written by artificial intelligence, undermining the hard work required for them to learn. Cybersecurity researchers have also warned that generative AI could enable bad actors, even governments, to produce far more disinformation than before.
At the same time, the technology itself is prone to making mistakes. Factual inaccuracies asserted confidently by the AI, called “hallucinations,” and erratic-seeming responses, like professing love to a user, are among the reasons companies have sought to test the technology before making it widely available.
Is it just Google and Microsoft?
These two companies are at the forefront of research and investment in large language models, and they have gone the furthest in putting generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Large companies like Salesforce Inc, as well as smaller ones like Adept AI Labs, are either building their own competing AI or packaging technology from others to give users new capabilities through software.
How is Elon Musk involved?
He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire stepped down from the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research conducted by Tesla Inc, the electric vehicle maker he leads.
Musk has expressed concern about the future of AI and has pushed for a regulatory authority to ensure that the development of the technology serves the public interest.
“It’s a pretty dangerous technology. I fear I may have done some things to accelerate it,” he said near the end of Tesla Inc’s Investor Day event earlier this month.
“Tesla is doing good things in AI, I don’t know, this one stresses me out, I’m not sure what else to say.”