Meta's AI tools for advertisers can now create full new images, not just new backgrounds – TechCrunch

Meta is rolling out an expanded set of generative AI tools for advertisers, after first announcing a batch of AI features last October. Instead of only generating different backgrounds for a product image, advertisers can now request full image variations: AI-generated takes on the overall photo, including versions that alter the photo’s subject, the very product being advertised.
In one example, Meta shows how an existing ad creative, a cup of coffee sitting outdoors next to coffee beans, could be modified to present the cup from a different angle, in front of lush greenery and coffee beans, evoking a coffee farm.
This may not be a big deal if the image is only meant to encourage someone to visit a local coffee shop. But if the coffee cup itself were for sale, then the AI variations Meta offers could depict versions of the product that don’t exist in real life.
In other words, the feature could be abused by advertisers who want to dupe consumers into buying products that don’t actually exist.
Meta admits this is a possible use case, saying that an advertiser could use the forthcoming text prompt feature to tailor the generated output, showing different colors of their product, from different angles and in different scenarios. Even now, the “different colors” option could be used to dupe customers into thinking a product looks different than it does in real life.
As Meta’s example demonstrates, the coffee cup itself could be transformed into different colors, or could be shown from different angles, where each cup has its own distinct swirl of foaming milk mixed in with the hot beverage.
However, Meta claims it has strong guardrails in place to prevent its system from generating inappropriate ad content or low-quality images. These include “pre-guardrails” that filter out images its generative AI models don’t support and “post-guardrails” that filter out generated text and image content that doesn’t meet its quality bar or that it deems inappropriate. Plus, Meta said it stress-tested its Llama image and full ads image generation models with both internal and external experts to find unexpected ways the feature could be used, then addressed any vulnerabilities found.
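To make that filtering concrete, a setup like the one Meta describes amounts to gating content on both sides of the generation step. Below is a minimal, hypothetical sketch of the pre-/post-guardrail pattern in Python; every name, check, and threshold here is illustrative, not Meta’s actual system:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

QUALITY_THRESHOLD = 0.7  # illustrative cutoff, not a real Meta value

@dataclass
class Image:
    format: str           # e.g. "jpeg", "png"
    policy_ok: bool       # stand-in for a real content-policy classifier
    quality: float = 0.0  # stand-in for a real quality scorer

def pre_guardrail(source: Image) -> bool:
    """Filter out inputs the generation models don't support."""
    return source.format in {"jpeg", "png"} and source.policy_ok

def post_guardrail(candidate: Image) -> bool:
    """Filter out outputs below the quality bar or deemed inappropriate."""
    return candidate.quality >= QUALITY_THRESHOLD and candidate.policy_ok

def generate_variations(
    source: Image,
    model: Callable[[Image, Optional[str]], List[Image]],
    prompt: Optional[str] = None,
) -> List[Image]:
    # Pre-guardrail: unsupported or disallowed inputs never reach the model.
    if not pre_guardrail(source):
        return []
    candidates = model(source, prompt)
    # Post-guardrail: drop generations that fail quality or policy checks.
    return [c for c in candidates if post_guardrail(c)]
```

The design point is that the pre-guardrail keeps unsupported inputs from ever reaching the model, while the post-guardrail judges each generated candidate independently before anything is shown to the advertiser.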
Meta says this feature has already begun to roll out, and in the months ahead, advertisers will be able to provide text prompts to tailor the image’s variations, too.
Plus, Meta will now allow advertisers to add text overlays to their AI-generated images, with a dozen popular typefaces to choose from.
Another feature, image expansion, also introduced in October 2023, is now available for Reels in addition to Feed, across both Facebook and Instagram. The option uses AI to adapt advertisers’ image assets to different aspect ratios, so they can spend less time repurposing creative for different surfaces. Meta says text overlays will work with image expansion, too.
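As a rough illustration of what such expansion involves (this is a hypothetical helper, not Meta’s implementation), the sketch below computes the larger canvas a generative model would need to fill when taking an image to a new aspect ratio, using Reels’ 9:16 vertical format as the target:

```python
# Given a source image and a target aspect ratio (e.g. 9:16 for Reels,
# 1:1 for a square Feed placement), compute the expanded canvas size.
# Everything outside the original image would be generated content.
def expansion_canvas(width: int, height: int, target_w: int, target_h: int) -> tuple[int, int]:
    target = target_w / target_h
    if width / height < target:
        return (round(height * target), height)  # widen: generate left/right
    return (width, round(width / target))        # heighten: generate top/bottom

# A 1200x628 landscape ad expanded for Reels (9:16) needs a 1200x2133 canvas,
# so roughly 70% of the final image would be generated.
print(expansion_canvas(1200, 628, 9, 16))  # -> (1200, 2133)
```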
One advertiser, smartphone case maker Casetify, said that using Meta’s GenAI Background Generation feature led to a 13% increase in return on its ad spend. The company had tested the option with its Advantage+ shopping campaigns, where the AI features first became available in the fall. The updated AI features will also be available through Ads Manager via Advantage+ creative, as before.
Beyond images, Meta’s AI can generate alternate versions of an ad’s headline, in addition to its primary text, which was already supported, using the original copy as reference. Meta says it’s testing the ability for this text to also match the brand’s voice and tone, drawing on previous campaigns as reference material. Text generation capabilities will move to Meta’s next-generation large language model, Llama 3.
All the generative AI features will become available globally to advertisers by the end of the year.
Outside of the AI updates, Meta also announced it would expand its subscription service, Meta Verified for businesses, to new markets including Argentina, Mexico, Chile, Peru, France, and Italy. The service began testing last year in Australia, New Zealand and Canada. 
Meta Verified will now offer four subscription tiers, all with the base features of a verified badge, account support, and impersonation monitoring. Higher tiers will add tools like profile enhancements, features for making connections, and more ways to access customer support.
Meta Verified will be expanded to WhatsApp soon, the company also said.