How Meta AI Compares to ChatGPT, DALL-E, and Google Gemini – Lifehacker

Last week, Meta released its latest update to Meta AI, further integrating the chatbot into the company’s apps and adding new capabilities like GIF generation. Debuting with Meta AI is Llama 3, the company’s latest large language model, and a new Meta.ai website for accessing the chatbot outside of Meta’s social media apps.
Llama 3 currently comes in versions with up to 70 billion parameters, with a 400-billion-parameter version on the way, so it’s certainly a powerful tool. But for the average person, what can it do that other tools like ChatGPT, DALL-E, and Google Gemini can’t? Here’s where you might want to use Meta AI over the competition.
While the most popular image generation AIs have until now been limited to still frames, Meta AI can automatically animate its images for you. Technically, this was already possible in services like DALL-E, but it required the user to prompt a sequence of frames and then stitch them together manually. Meta will now do the in-between work for you, though the results aren’t perfect, as I saw in my testing.
The process is pretty simple. First, ask the chatbot to generate your image.
Then, tell it to animate the image to turn it into a GIF. In my testing, telling it to make a GIF straight from the start hasn’t worked.
And that’s it. The results, though, haven’t been foolproof for me. The above example of a cat chasing a ball of yarn worked well enough, but this hummingbird seems to just be hovering in space and stabbing its feeder with its beak. 
Meanwhile, here’s the world’s youngest 17-year-old (my prompt just said “child”), amazed by some magical confetti springing up from his birthday cake. (I asked the AI to make a GIF of him blowing the candles out.)
Also unique to Meta AI is the ability to generate images as you type. Doing so is pretty simple, and while the images suffer from the same dream logic a lot of AI art does, the speed with which they pop up is impressive.
To generate images as you type, just go to Meta.ai (the feature doesn’t seem to work on the company’s social media apps quite yet, at least in my own testing), start with the “imagine” command, and begin describing your image. The chatbot will preview a new result for almost every character you type, allowing you to change the prompt as you type to tune it to your liking.
For instance, when I asked the chatbot to “imagine a dog chasing a ball,” it showed me a different breed of dog with almost every new character. When I told it to specify “imagine a dog chasing a ball down the,” it intuited that I wanted the dog to chase the ball down a street. When I specified that the dog itself should be getting chased by clowns, simply adding the word “and” after “chased by clowns” added a third clown to the frame.
On its own, it’s a dazzling gimmick. I’m used to AI art taking a few seconds to generate each individual frame, and this instead seems to push out the art in real time. But what takes it beyond gimmick territory is that pressing “enter” will give you four different results, and allow you to make a video of all the other options, so you can review your work and go back to the drawing board if needed.
Not every AI chatbot has live access to the internet, at least for free users. This is most noticeable on ChatGPT, where the free version’s data is limited to information from before January 2022. Meta AI, on the other hand, has access to both Google and Bing.
This means you can use Meta AI to answer questions requiring it to access current information. You could ask it “tell me the weather in NYC this weekend” or “did the White House announce anything important today?” You could even ask it to compile information, giving commands like “How is Dragon’s Dogma 2 reviewing?” or “What is the best movie to see this weekend?”
For each of these responses, Meta AI will tell you which search engine it used and which sources it referenced.
Meta isn’t alone in this ability, since Google Gemini also has live internet access. Where Meta’s chatbot differs is that it isn’t limited to Google Search, and that it handles sources differently. Sometimes, the sources that Google Gemini references are “not necessarily what Gemini used to generate its response”—they might just be sources Google thinks you would find useful. Meta AI, meanwhile, seems to always surface the sources it used with 100% certainty, but only those sources.
Whether this is a good or bad thing depends on who you ask, but Meta AI is now included with the Facebook, Instagram, Messenger, and WhatsApp apps. Integration is different depending on each app—in Messenger and WhatsApp, Meta AI will show up as a contact. In Instagram, it’ll show up in the search bar. And in Facebook, it will show up in the search bar and in your feed.
For the most part, this means integration is fairly negligible outside of the latter two apps. For search bar integration, Meta AI will now suggest searches and questions as you type them, although these can be ignored by simply not clicking on any suggestions with the Meta AI logo next to them (if you do click on one, you’ll leave your search and enter a conversation with the chatbot). Regular searches will still populate, and they’ll instead have a magnifying glass next to them.
For feed integration in Facebook, it’s a bit more robust. On certain posts, Meta AI will now show up below the post, offering to tell you more information about it or perhaps suggesting questions to ask about it. Your feed may occasionally also get interrupted with a generic card prompting you to use Meta AI, which functions something like an ad.
With its latest update, Meta AI takes a few steps ahead of its most mainstream competition, meaning it’s now probably your best first port of call for AI-generated GIFs. Image generation is also strong and now has that unique “imagine” feature, and Meta AI has access to one more search engine than Google Gemini does.
Still, you’re likely going to want to continue using ChatGPT, DALL-E, and Google Gemini alongside it, just so you can browse different results, especially when it comes to the still often dreamlike nature of AI art.

Michelle Ehrhardt is Lifehacker’s Associate Tech Editor. She has been writing about tech and pop culture since 2014 and has edited for outlets including Gizmodo and Tom’s Hardware.