AI Insider No. 17

Hey there. Welcome back to AI Insider. It’s hard to believe it’s September already. Even though I’m not headed back to school, I still feel like I’m teaching and learning. I hope you feel the same way, too. And I hope you learn something you didn’t already know as you scroll through this edition of AI Insider. Note: After today we will be on hiatus until Sept. 24. Got some traveling to do. Details to come. I will be back (along with the bots) at the end of the month. In the meantime, if you missed any editions you can always find them here. Happy September!

(Michelle Johnson via Adobe Firefly)

Why Don’t I Have Whiplash?

By Michelle Johnson

Some days I wonder how I don’t end up with whiplash. One minute somebody over here is saying that OpenAI (maker of ChatGPT), Google, and fill-in-the-blank are practically done for. The next minute somebody over there is giddy about some new thing OpenAI, Google and fill-in-the-blank have released: “This has no doubt catapulted them into the lead!” Right.

Several times this week I’ve read something akin to obits for OpenAI, Google and various fill-in-the-blanks.

Then, OpenAI announced its Enterprise (aka business) edition of ChatGPT (see item below), and suddenly the forecast is sunny again. But wait, look over there! It says that OpenAI stands to lose by pissing off Microsoft if they go trolling in Microsoft waters for enterprise customers. (Microsoft has invested billions in OpenAI.)

Swivel your head another direction and you’ll see that OpenAI “is reportedly on track to generate more than $1 billion in revenue.” (Thanks for that update, Fast Company.)

Next, turn this way to see Google about to release Gemini, their set of large language models (the AI systems that power chatbots), which will “blow GPT-4 out of the water,” a writer for Techopedia says.

Snap back around and run smack into a report that Google is being accused of plagiarizing news sites with its Search Generative Experience summaries. They recently baked these into their search results page. Oh, no! This may lead to more news orgs blocking bots from scraping their content.

Who’s closed the door to ChatGPT sucking up their stuff for free? The New York Times, Amazon and CNN, to name a few. Blocked. See a more complete list here.

As for the fill-in-the-blanks: the startup image generators, startup chatbots, and startup “AI-fueled” apps just keep jumping into the fray, flailing around to get our attention.

Whatever. No need to worry your heads about any of this. Just keep reading AI Insider. I (and the bots) will keep risking the whiplash, so you don’t have to.


(Michelle Johnson via Midjourney)

ChatGPT Gets Down to Business

By Bard for AI Insider

OpenAI just rolled out ChatGPT Enterprise, offering businesses a powerful, customizable AI chatbot. ChatGPT Enterprise is based on OpenAI’s ChatGPT, a large language model that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. (Ed. note: Bard appears to work in PR.)

ChatGPT Enterprise offers a number of features for businesses, including:

• Enterprise-grade security and privacy: All conversations are encrypted.

• Unlimited access to the higher-speed GPT-4, the latest and greatest version.

• Processing longer inputs: This means that ChatGPT Enterprise can handle longer conversations than the standard ChatGPT allows.

• Advanced data analysis capabilities: ChatGPT Enterprise can be used to analyze data to identify trends and patterns.

• Customization options: ChatGPT Enterprise can be customized to meet specific business needs.

[Ed. note: Why should you care about any of this? Well, if you work for a company, surely some committee is looking at how they are going to integrate AI into your workplace. (And how they are going to control your use of it.) So you may get access to this bag of goodies. Stay tuned.]


(MJ via DreamStudio)

Another Slap in the Face

(Ed. Note: I really hate facial ID on my phone. It fails like 90 percent of the time. But my life doesn’t depend on it. There’s some new research showing that pedestrian detection software that runs in self-driving cars sometimes does a poor job detecting people of color and children. Here’s an update.)

By Bing for AI Insider

Self-driving cars are supposed to make our lives easier and safer, but what if they have a hidden flaw that puts some people at risk? A new study has revealed that the pedestrian detection systems in self-driving cars are less likely to detect children and people of color, due to bias in the artificial intelligence that powers them.

The study, published in the Journal of Medical Internet Research, tested eight popular pedestrian detectors and found that they were more accurate at detecting adults than children, and people with lighter skin tones than people with darker skin tones. This means that self-driving cars may not be able to avoid hitting these groups, especially in low-light or crowded situations.

The researchers blamed this problem on the open-source AI systems that many companies use to build the detectors. These systems are trained on data that may not reflect the diversity of the real world, resulting in biased outcomes. The researchers called for more regulation and oversight of self-driving car software to ensure fairness and safety for all pedestrians.

( Ed. Note: And speaking of bias, next item.…)


Meta Develops Tool to Evaluate AI Fairness

By Claude and Bing for AI Insider

Meta just launched a new tool that tests the fairness of computer vision systems across various demographic groups. Meta, the company that owns Facebook, announced that they are making their computer vision tool, DINOv2, available for commercial use. Computer vision involves training AI to interpret and understand digital images and videos. 

Along with the release of DINOv2, Meta is also introducing a new benchmark for evaluating the fairness of computer vision models. This benchmark, called FACET, will help researchers determine if a computer vision model is generating biased results.

Vision systems can develop biases against certain subgroups like race, gender, or age. 

Meta’s tool works by running a vision system on a diverse dataset of human images. It checks if the system performs equally well across different demographics. For example, does it identify people of all skin tones with similar accuracy?

During testing, FACET found biases in popular computer vision models, such as misclassifying images of elderly people. By identifying these weak spots, companies can upgrade their AI systems to be more equitable.
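To make the idea concrete, here’s a minimal sketch of the core check a fairness benchmark performs: run the model on labeled images and compare accuracy across demographic subgroups. (This is an illustration of the concept, not Meta’s actual FACET code; the function name and the toy data are made up.)

```python
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Accuracy of predictions vs. labels, split by demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical model outputs on six images of people,
# tagged with a skin-tone attribute:
preds  = ["person", "person", "person", "dog",  "person", "person"]
labels = ["person"] * 6
tones  = ["light",  "light",  "dark",   "dark", "light",  "dark"]

acc = per_group_accuracy(preds, labels, tones)
# A large gap between groups signals a biased model.
gap = max(acc.values()) - min(acc.values())
```

In this toy run, the model gets every “light” image right but misses one “dark” image, so the accuracy gap between groups flags a potential bias.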


(Michelle Johnson via Adobe Firefly.)

US Copyright Office Seeks Input on AI Policy

By Bing for AI Insider

The US Copyright Office is seeking public input as it prepares to propose new rules or regulations for generative AI. The agency has opened a comment period from August 30 to October 18, where anyone can submit their views on the following issues:

• How should AI models use copyrighted data in training?

• Can AI-generated material be copyrighted even without a human involved?

• How would copyright liability work with AI?

• How would AI affect publicity rights and unfair competition laws?

The comment period is the last step before the agency drafts new rules, which could have significant implications for the future of creative industries. The agency has already received applications to register works containing AI-generated material, and several lawsuits have been filed alleging copyright infringement by generative AI tools.

To share your opinion or learn more, see the Federal Register notice for details.


Note the differences between the source video and the AI edits. (ByteDance Researchers)

The Future of Video Editing?

By Michelle Johnson

Let’s be clear: This isn’t a thing yet. These are research projects, but they’re worth looking at because some day this tech may end up in consumer products coming to a computer near you. OK, now that that’s out of the way, check this out.

If you’ve ever used an AI image generator like Adobe Firefly or Midjourney, you may have heard of this thing called “inpainting.” That’s where you select a portion of an image and swap it out for something else. Like if your subject is wearing a blue sweater, you type in a prompt and tell it to turn it into a brown suit jacket or something.

Or “outpainting,” where you draw a border around the blank area of an image, say a picture of a lake surrounded by mountains, and it expands the image, making the mountain range longer on each side. Basically, it imagines a longer mountain range and just draws it in.
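The mechanics behind inpainting boil down to a mask: mark which pixels to replace, synthesize new content for just that region, and leave the rest alone. Here’s a toy sketch of that masking idea (real tools use generative models to invent the new pixels; this just fills in a solid color, and the function name is my own):

```python
import numpy as np

def inpaint_region(image, mask, new_pixels):
    """Replace only the masked pixels of `image` with `new_pixels`."""
    result = image.copy()
    result[mask] = new_pixels[mask]
    return result

# A 4x4 RGB "image" of solid blue pixels (the blue sweater).
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[..., 2] = 255

# Mask the 2x2 patch we want to swap out.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# The replacement content (a brown-ish fill, standing in for
# what a generative model would synthesize).
replacement = np.zeros_like(image)
replacement[..., 0] = 139
replacement[..., 1] = 69
replacement[..., 2] = 19

edited = inpaint_region(image, mask, replacement)
```

Outpainting is the same trick with the mask placed outside the original image borders, so the model has to dream up what the extra canvas should contain.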

So, what about doing stuff like that in a video? A still image is one frame. What do you do when confronted with the many frames that make up a moving video?

Up to now, there have been ways to do that, but they can lead to a loss in the quality of the video. It can get jerky, for instance.

So, a group of researchers who work for TikTok (yes, that TikTok) have come up with something called “MagicEdit.”

Read their brief for the details, but the gist is that they have brought some of the AI features typically used for still images to video, and they claim to have done it in a simpler way than before.

Meanwhile, another team of researchers is working on video editing, too. Their AI tool is called INVE. I’ll be damned if I can explain the differences between what this team is doing versus the other one, so I put the question to Bing. 

Answer: “Here’s a simpler explanation: MagicEdit and INVE are both tools that use artificial intelligence to make video editing easier and better. They both have ways to make sure the changes you make to a video look good and make sense. But they do it in different ways. MagicEdit can change how things look and move in the video, while INVE can change specific parts of the video while keeping everything else the same.”


(Via Unsplash/Daniel Sone)

Paging Dr. ChatGPT!

By Bing and Michelle Johnson for AI Insider

If you live in the Boston area, you’re familiar with Mass General Brigham, one of several world-class medical institutions in these parts. Not surprisingly, their researchers are kicking the tires on AI.

A recent study found that ChatGPT showed impressive accuracy in a test of clinical decision making. The study was conducted by pasting portions of 36 standardized, published “clinical vignettes” (scenarios) into ChatGPT. The tool was first asked to come up with a set of possible diagnoses based on the patient’s initial information, which included age, gender, symptoms, and whether the case was an emergency.

The researchers found that ChatGPT achieved an overall accuracy of 72% in clinical decision making across all vignettes. The AI chatbot performed best in making a final diagnosis, boasting a success rate of 77%. 

However, it was lowest-performing in making differential diagnoses (Ed. Note: meaning identifying multiple possible diagnoses for a patient’s symptoms), where it was only 60% accurate. It was also only 68% accurate in decisions such as figuring out what medications to treat the patient with after arriving at the correct diagnosis.

While ChatGPT’s accuracy is impressive, it still faces limitations. The researchers reported that the tool struggled with rare diseases and conditions that were not included in the training data. Additionally, the tool was not able to provide explanations for its decision-making process, which could be a concern for clinicians who need to understand the reasoning behind a diagnosis.


What The Actual Hell?

A new-ish feature showcasing AI stories that may make you go “hmmm.”

Beat the Hackers to Your Deepfake

🤔 Well, this woke me up in the morning. My old friend Ina, who’s a stellar tech journalist at Axios, took a look at HeyGen, a new AI thing that allows you to create a “deep fake” of yourself. Yes, really. Why let hackers do it when you can DIY? Upload a video of yourself talking and HeyGen will generate your own personal talking avatar. Enter a prompt and the words will come out of your virtual mouth. See Ina’s take on this. Of course there’s video. 😆

Here’s a look at getting a robotic dog to move via a text prompt. (Google Research)

Sit, Robo! Sit!

🤔 You know those scary looking robotic dogs like the ones they make at Boston Dynamics, just down the road from where I’m sitting? Well, some Google researchers have developed a new system to help robot dogs understand human language. The system is called SayTap. It lets people give voice commands to robot dogs like “walk forward” or “sit.” Those bad boys are spooky enough. Imagine talking to one.


Random Shorts

AI for Yahoo Mail? Woo Hooo!: Do your friends try to shame you because you still have a Yahoo email address? Hah. Now you can brag that Yahoo Mail has introduced new AI-powered capabilities, including a “Shopping Saver” tool. It digs into your clogged inbox to find deals and coupons that you may have overlooked. It will even write a request to try and get you a discount after you’ve made a purchase.

Attention teachers: Here’s a guide to teaching with ChatGPT. OpenAI has produced this for teachers trying to figure out how to fit this tech into the curriculum. It offers ideas for lesson plans, creating quizzes, AI tutors and more. Just in time for back to school.

Perplexity Pro + Claude-2: Perplexity, the AI chatbot that you may have never seen unless you clicked on a link from this newsletter, has added Claude-2 for their Pro subscribers. Claude, launched by Anthropic, a company you may have never heard of unless you read this newsletter or follow AI news, is a stellar bot for doing everything from writing to answering all kinds of questions. This will no doubt beef up Perplexity, which was pretty good on its own. Step away from ChatGPT for a minute and try Perplexity and Claude for free. Or give Perplexity some money for Pro and get the 2-for-1 deal.

Survey Shows Americans Worry About AI: The respected Pew Research Center has a new report out on how Americans are feeling about AI. The survey found that 52% of Americans are more concerned than excited about the increased use of artificial intelligence. Only 10% of respondents said they are more excited than concerned, while 36% said they feel a mix of both emotions.


Fun and Useful Stuff to Try

Storly.AI: You know the saying — everyone has a story to tell. Give Storly the details and it will crank out one for you. Here’s an example: the story of how I came to major in journalism, as written by Storly. Try it for free.

Tour Guidde – Did you get saddled at work with the task of creating training materials, FAQs or how-tos? Check out Guidde. It will create videos for all of this and more. Free and paid.

Design Like an HGTV Star: Struggling to come up with a new look for your room? On a budget and need design ideas? Upload a picture of your room to CollovGPT and let it whip up a new look. Free and paid.


Aht Gallery

In lieu of the aht gallery this week: And the winner is…

This is AI Insider’s new logo/mascot. Note: Only 16 of you voted, so no participation, no right to speak! It is what it is. And sorry to the two of you who voted for starting over from scratch. 😂
