AI Insider No. 43

Greetings, AI Insiders! Depending on who you choose to believe, either OpenAI reigns supreme with the launch of their latest version of ChatGPT or Google took over the AI reins this week. There was so much news that we’re clearing out the newsletter to just focus on these two events. We’ve boiled down this avalanche of announcements to answer the question: “What’s in it for me?” As always, if you like what you see here, feel free to drop something in the tip jar to support AI Insider. 


Free Users Get Major ChatGPT Boost

By Michelle Johnson and Perplexity for AI Insider

Listen up, freeloaders. You win. OpenAI is giving you access to a boatload of features that have only been available to paid subscribers until now.

At a live-streamed announcement from its headquarters on Monday, OpenAI unveiled GPT-4o, its newest and fastest model, and made it available to everyone. The “o” stands for “omni,” meaning that this model is “multimodal.”

“GPT-4o reasons across voice, text, and vision,” OpenAI CTO Mira Murati said in announcing the update.

The “Spring Update” event featured some jaw-dropping demos of GPT-4o’s voice capabilities. 

A female voice giggled and joked with the presenters. Reviewers compared the upgraded voice features to the very human-sounding bot featured in the movie “Her.”

Demo videos released by OpenAI show 4o translating a conversation (Spanish-English) between two speakers in real time, two mobile phones singing in harmony, and more.

It’s a bit difficult to understand the comparison to “Her” or the remarkable features available via voice without hearing it. If you haven’t seen the videos yet, take a look. Go ahead. Click this, watch, and prepare to be amazed. Seriously, you’ll want to see this. And this. It ain’t Siri or Alexa. It’s truly like talking to someone in real time.

As usual, not every feature announced or demoed is available right now. Most are rolling out gradually, and many weren’t even showcased in the livestream.

How do you know if you have GPT-4o yet? When you log in or refresh the ChatGPT screen, you’ll see a pop-up notification announcing it. Click the drop-down in the upper left corner to verify that it’s set to GPT-4o.


Here’s a breakdown of what free and paid Plus users can expect:

Free Users

Previously, subscribing to ChatGPT Plus, which costs $20 a month, was the only way to access the advanced features of GPT-4, OpenAI’s most powerful large language model (LLM). 

This update includes features that were once exclusive to paid subscribers, such as browsing the web for current information, uploading photos and documents to ask questions about them, and using advanced data analysis tools. Additionally, free users can now access custom GPT bots from the GPT Store and benefit from the Memory feature, which provides continuity in conversations.

Despite these enhancements, free users have some limitations. The number of messages you can send using GPT-4o is restricted based on usage and demand, although OpenAI has not specified the exact limits.

What if you exceed the limit? You get dropped back down to version 3.5.

Paid Users

With OpenAI offering so much for free, why not just cancel? Well, if you’re a frequent ChatGPT user, your best bet will continue to be the Plus account. Paid users enjoy higher usage limits and priority access during peak times.

You also get the newest features first. For instance, on Friday, OpenAI announced a major data analysis upgrade that will roll out to paid users first. Besides analyzing datasets, it can generate charts and graphs. 

“You can now customize and interact with bar, line, pie, and scatter plot charts in the conversation. Hover over chart elements, ask additional questions, or select colors. When ready, download to use in presentations or documents,” OpenAI said in their announcement.

It works with uploaded files and can access files in your Google Drive, too.

And, get this: A version of GPT-4o for Mac desktops is currently rolling out to paid users. The Windows version will be released later this year. No word on when free users will get it.

Google CEO Sundar Pichai presents at Google I/O, 2024. (Google)

Google I/O Hit By AI Tsunami

By Michelle Johnson and ChatGPT for AI Insider

On Tuesday, just a day after OpenAI created a buzz with the release of GPT-4o, Google unleashed a wave of announcements about its flagship Gemini AI at Google I/O, the annual developers conference. 

How many announcements? Google published its own roundup: “100 things we announced at I/O 2024.” And not surprisingly, the focus was AI, echoing other companies looking to infuse AI into everything.

Some befuddled reviewers struggled to cover the two-hour event, which took place in Mountain View, California, because there was so much to sort through. And who has time to pick through that list of 100 things? (Me! Me!) This is a conference for developers, so I’ve picked through the list and kept the focus here on what we mere mortal, average consumers would care about.

Let’s start with Google’s bread and butter: 

Search
A version of this has been available as an experimental feature in Google Labs for a while, but now all US users will get to experience “AI Overviews” in search. What’s that? Think of it as a mashup of the Gemini chatbot and the Google search we all know and mostly love.


If you need a quick answer to something, a response appears at the top of the results screen along with the usual relevant links. You can even ask follow-up questions.

They’re billing this as letting “Google do the Googling for you.” Check it out.

I’ve been using it for a while through Google Labs, and I like the integration. With this feature, Google is competing with Perplexity AI, the “answer engine.”

Gemini (FKA Bard)

If you’re a Gemini Advanced subscriber, you’ve now got Gemini 1.5 Pro! Features available now or coming soon: upload even bigger files (a 1,500-page PDF? Sure.), analyze data faster, have Gemini go beyond suggesting activities for a trip and actually create a custom itinerary, converse via Gemini Live, and create customized Gemini “Gems” (akin to OpenAI’s GPT Store feature). Details here.

Gemini 1.5 Pro will be available in a side panel in Gmail, Docs, Drive, Slides, and Sheets via Workspace Labs.

Maps
Geek speak: Google announced the Places API, which lets developers generate AI summaries of places in their apps and websites. It draws on Gemini’s analysis of insights from Maps’ community of more than 300 million contributors.

What’s in it for you? TechCrunch offered up this example: “If a developer has a restaurant-booking app, this new capability will help users understand which restaurant is best for them. When users search for restaurants in the app, they’ll be able to quickly see all the most important information, like the house specialty, happy hour deals, and the place’s vibes.”
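For developers curious what that restaurant-app scenario might look like under the hood, here’s a minimal sketch of requesting a Gemini-generated place summary from the Places API (New). The endpoint, header names, and the `generativeSummary` field are assumptions based on Google’s public documentation at the time of writing, not details from the I/O announcement itself; check the current Places API reference before relying on any of them.

```python
# Hypothetical sketch: fetching an AI-generated place summary via the
# Places API (New). Endpoint and field names are assumptions; verify
# them against Google's current Places API documentation.
import json
import urllib.request

PLACES_ENDPOINT = "https://places.googleapis.com/v1/places"


def build_request(place_id: str, api_key: str) -> urllib.request.Request:
    """Build a Place Details request asking only for the Gemini summary."""
    return urllib.request.Request(
        f"{PLACES_ENDPOINT}/{place_id}",
        headers={
            "X-Goog-Api-Key": api_key,
            # The field mask limits the response (and billing) to the
            # fields the app actually needs.
            "X-Goog-FieldMask": "displayName,generativeSummary",
        },
    )


def fetch_summary(place_id: str, api_key: str) -> str:
    """Call the API and pull out the generated overview text."""
    with urllib.request.urlopen(build_request(place_id, api_key)) as resp:
        place = json.load(resp)
    return place["generativeSummary"]["overview"]["text"]
```

A restaurant-booking app could call something like `fetch_summary()` per search result and show the returned blurb (the specialty, the vibe) right in the listing, which is essentially the TechCrunch example above.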

Google Labs
If you’re the type who likes to mess around with works in progress or just keep up to date on the latest developments, here are a few things that might catch your eye in this latest round of updates.

Images: Google released Imagen 3, an upgrade of the model that powers its still-experimental ImageFX AI image generator. They say it turns out even more realistic-looking images. We shall see. You’ll have to join a waitlist to gain access.

Video: The movie industry shook when OpenAI released Sora, which generates high-quality video from text prompts. Google probably shook, too. They’ve got their own video generation model now. It’s called Veo. They plan to bring its capabilities to YouTube Shorts and other products in the future. Here’s a video featuring filmmakers (including Donald Glover) collaborating on a project using Veo. You’ll be able to try this model on Google’s VideoFX, announced at Google I/O. Join the waitlist here.

Music: From Google: “MusicFX has a new feature called ‘DJ Mode’ that helps you mix beats by combining genres and instruments, using the power of generative AI to bring music stories to life.”

Everything Else, Directly from Google’s 100 List

53. Starting with Pixel later this year, Gemini Nano — Android’s built-in, on-device foundation model — will have multimodal capabilities. Beyond just processing text input, your Pixel phone will also be able to understand more information in context like sights, sounds and spoken language.

55. A new, opt-in scam protection feature that will use Gemini Nano’s on-device AI to help detect scam phone calls in a privacy-preserving way. Look for details later this year.

57. Soon, you’ll be able to use Gemini on Android to create and drag and drop generated images into Gmail, Google Messages, and more, or ask about the YouTube video you’re viewing.

58. If you have Gemini Advanced, you’ll also have the option to “Ask this PDF” to get an answer quickly without having to scroll through multiple pages.

59. Students can now use Circle to Search for homework help directly from select Android phones and tablets. This feature is powered by LearnLM — our new family of models based on Gemini, fine-tuned for learning.

62. Theft Detection Lock uses powerful Google AI to sense if your device has been snatched and quickly lock down the information on your phone.

65. Later this year, Google Play Protect will use on-device AI to help spot apps that attempt to hide their actions to engage in fraud or phishing.

68. We showed off how augmented reality content will be available directly in Google Maps, laying the foundation for an extended reality (XR) platform we’re building in collaboration with Samsung and Qualcomm for the Android ecosystem.

69. You can now catch up on episodes of your favorite shows on Max and Peacock or start a game of Angry Birds on select cars with Google built-in.

70. We are also bringing Google Cast to cars with Android Automotive OS, starting with Rivian in the coming months, so you can easily cast video content from your phone to the car.

71. Later this year, battery life optimizations are coming to watches with Wear OS 5. For example, running an outdoor marathon will consume up to 20% less power when compared to watches with Wear OS 4.

73. It’s now easier to pick what to watch on Google TV and other Android TV OS devices with personalized AI-generated descriptions, thanks to our Gemini model.
