February 4, 2024

# This Week in AI: Do shoppers actually want Amazon’s GenAI?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week, Amazon announced Rufus, an AI-powered shopping assistant trained on the e-commerce giant’s product catalog as well as information from around the web. Rufus lives inside Amazon’s mobile app, helping users find products, compare them, and get recommendations on what to buy.

“From broad research at the start of a shopping journey such as ‘what to consider when buying running shoes?’ to comparisons such as ‘what are the differences between trail and road running shoes?’ … Rufus meaningfully improves how easy it is for customers to find and discover the best products to meet their needs,” Amazon writes in a blog post.

That’s all great. But my question is: who’s really clamoring for it?

I’m not convinced that GenAI, particularly in chatbot form, is a piece of tech the average person cares about — or even thinks about. Surveys support me in this. Last August, the Pew Research Center found that among those in the U.S. who’ve heard of OpenAI’s GenAI chatbot ChatGPT (18% of adults), only 26% have tried it. Usage varies by age, of course, with a greater percentage of young people (under 50) reporting having used it than older adults. But the fact remains that the vast majority either don’t know about, or don’t care to use, what’s arguably the most popular GenAI product out there.

GenAI has its well-publicized problems, among them a tendency to make up facts, infringe on copyrights, and spout bias and toxicity. Amazon’s previous attempt at a GenAI chatbot, Amazon Q, struggled mightily — revealing confidential information within the first day of its release. But I’d argue GenAI’s biggest problem now — at least from a consumer standpoint — is that there are few universally compelling reasons to use it.

Sure, GenAI tools like Rufus can help with specific, narrow tasks: shopping by occasion (e.g., finding clothes for winter), comparing product categories (e.g., the difference between lip gloss and oil), and surfacing top recommendations (e.g., gifts for Valentine’s Day). Are they addressing most shoppers’ needs, though? Not according to a recent poll from e-commerce software startup Namogoo.

Namogoo, which asked hundreds of consumers about their needs and frustrations when it comes to online shopping, found that product images were by far the most important contributor to a good e-commerce experience, followed by product reviews and descriptions. The respondents ranked search as the fourth-most important and “simple navigation” the fifth; remembering preferences, information and shopping history was second-to-last.

The implication is that people generally shop with a product already in mind; search is an afterthought. Maybe Rufus will shake up the equation. I’m inclined to think not, particularly if the rollout is rocky (and it well might be, given the reception of Amazon’s other GenAI shopping experiments) — but stranger things have happened, I suppose.

Here are some other AI stories of note from the past few days:

Google Maps experiments with GenAI: Google Maps is introducing a GenAI feature to help you discover new places. Leveraging large language models (LLMs), the feature analyzes the over 250 million locations on Google Maps and contributions from more than 300 million Local Guides to pull up suggestions based on what you’re looking for.

GenAI tools for music and more: In other Google news, the tech giant released GenAI tools for creating music, lyrics, and images and brought Gemini Pro, one of its more capable LLMs, to users of its Bard chatbot globally.

New open AI models: The Allen Institute for AI, the nonprofit AI research institute founded by the late Microsoft co-founder Paul Allen, has released several GenAI language models it claims are more “open” than others — and, importantly, licensed in such a way that developers can use them unfettered for training, experimentation, and even commercialization.

FCC moves to ban AI-generated calls: The FCC is proposing that using voice cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators of these frauds.

Shopify rolls out image editor: Shopify is releasing a GenAI media editor to enhance product images. Merchants can select a type from seven styles or type a prompt to generate a new background.

GPTs, invoked: OpenAI is pushing adoption of GPTs, third-party apps powered by its AI models, by enabling ChatGPT users to invoke them in any chat. Paid users of ChatGPT can bring GPTs into a conversation by typing “@” and selecting a GPT from the list.

OpenAI partners with Common Sense: In an unrelated announcement, OpenAI said that it’s teaming up with Common Sense Media, the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators, and young adults.

Autonomous browsing: The Browser Company, which makes the Arc Browser, is on a quest to build an AI that surfs the web for you and gets you results while bypassing search engines, Ivan writes.

## More machine learnings

Does an AI know what is “normal” or “typical” for a given situation, medium, or utterance? In a way, large language models are uniquely suited to identifying which patterns in their datasets are most like other patterns. And indeed, that is what Yale researchers found when they studied whether an AI could identify the “typicality” of one thing within a group of others. For instance, given 100 romance novels, which is the most, and which the least, “typical” given what the model has stored about that genre?

Interestingly (and frustratingly), professors Balázs Kovács and Gaël Le Mens worked for years on their own model, a BERT variant, and just as they were about to publish, ChatGPT came out and in many ways duplicated exactly what they’d been doing. “You could cry,” Le Mens said in a news release. But the good news is that the new AI and their old, tuned model both suggest that indeed, this type of system can identify what is typical and atypical within a dataset, a finding that could be helpful down the line. The two do point out that although ChatGPT supports their thesis in practice, its closed nature makes it difficult to work with scientifically.
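For readers curious what “typicality” might look like in practice, one common way to operationalize it — not necessarily the researchers’ exact method, and the encoder named below is an arbitrary choice — is to embed each text and rank items by their similarity to the group’s average embedding:

```python
# A minimal sketch of one way to score "typicality" within a group of texts.
# Not the authors' code; the encoder and the centroid-similarity idea are
# illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

def typicality_scores(texts):
    """Score each text by cosine similarity to the centroid of all texts."""
    model = SentenceTransformer("all-MiniLM-L6-v2")      # arbitrary encoder choice
    emb = model.encode(texts, normalize_embeddings=True)  # (n_texts, dim), unit-norm rows
    centroid = emb.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    return emb @ centroid  # higher = more "typical" of the group

novels = ["summary of novel 1 ...", "summary of novel 2 ...", "summary of novel 3 ..."]
scores = typicality_scores(novels)
print("most typical:", int(np.argmax(scores)), "least typical:", int(np.argmin(scores)))
```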

Scientists at the University of Pennsylvania set out to quantify another odd concept: common sense. They asked thousands of people to rate statements, things like “you get what you give” or “don’t eat food past its expiry date,” on how “commonsensical” they were. Unsurprisingly, although patterns emerged, there were “few beliefs recognized at the group level.”

“Our findings suggest that each person’s idea of common sense may be uniquely their own, making the concept less common than one might expect,” co-lead author Mark Whiting says. Why is this in an AI newsletter? Because like pretty much everything else, it turns out that something as “simple” as common sense, which one might expect AI to eventually have, is not simple at all! But by quantifying it this way, researchers and auditors may be able to say how much common sense an AI has, or what groups and biases it aligns with.
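As a toy illustration of how such ratings can be aggregated (the Penn team’s actual metric is more sophisticated than this), a per-statement “consensus” score can be as simple as the share of respondents who call the statement common sense:

```python
# Illustrative only: a crude consensus score per statement, i.e. the fraction
# of respondents who rated it as common sense. The statements and votes below
# are made up for demonstration.
ratings = {
    "you get what you give": [1, 1, 0, 1, 1],             # 1 = "common sense", 0 = not
    "don't eat food past its expiry date": [1, 0, 0, 1, 0],
}
for statement, votes in ratings.items():
    consensus = sum(votes) / len(votes)
    print(f"{statement!r}: {consensus:.0%} agreement")
```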

Speaking of biases, many large language models are pretty loose with the info they ingest, meaning if you give them the right prompt, they can respond in ways that are offensive, incorrect, or both. Latimer is a startup aiming to change that with a model that’s intended to be more inclusive by design.

Though there aren’t many details about their approach, Latimer says that their model uses Retrieval Augmented Generation (thought to improve responses) and a bunch of unique licensed content and data sourced from lots of cultures not normally represented in these databases. So when you ask about something, the model doesn’t go back to some 19th-century monograph to answer you. We’ll learn more about the model when Latimer releases more info.
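Latimer hasn’t published its pipeline, but the basic retrieval-augmented generation loop it alludes to can be sketched generically: pull the most relevant passages from a curated corpus, then hand them to the model as grounding context. Everything below — the corpus, the word-overlap scoring, and the placeholder generate() call — is illustrative, not Latimer’s implementation:

```python
# Generic RAG sketch: retrieve relevant passages, then prompt the model with them.
def retrieve(query, corpus, k=2):
    """Rank curated/licensed passages by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: len(q & set(doc.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt):
    # Placeholder for a call to the underlying language model.
    return f"[model response grounded in:\n{prompt}]"

corpus = [
    "Passage from a licensed source on Caribbean culinary history ...",
    "Passage from a community archive on Harlem Renaissance literature ...",
    "Passage from an oral-history collection on West African storytelling ...",
]
question = "Tell me about the history of jollof rice."
context = "\n".join(retrieve(question, corpus))
print(generate(f"Use only this context:\n{context}\n\nQuestion: {question}"))
```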


One thing an AI model can definitely do, though, is grow trees. Fake trees. Researchers at Purdue’s Institute for Digital Forestry made a super-compact model that simulates the growth of a tree realistically. This is one of those problems that seems simple but isn’t; you can simulate tree growth that works if you’re making a game or movie, sure, but what about serious scientific work?

Their new model is only about a megabyte, which is extremely small for an AI system. But of course DNA is even smaller and denser, and it encodes the whole tree, root to bud. The model still works in abstractions — it’s by no means a perfect simulation of nature — but it does show that the complexities of tree growth can be encoded in a relatively simple model.

Last up, a robot from Cambridge University researchers that can read braille faster than a human, with 90% accuracy. Why, you ask? Actually, it’s not for blind folks to use — the team decided this was an interesting and easily quantified task to test the sensitivity and speed of robotic fingertips. If it can read braille just by zooming over it, that’s a good sign!
