Meta has announced significant enhancements to its AI offerings, introducing expanded language support and new features designed to improve the user experience. Meta AI now supports a wider range of languages and includes new functionality such as creating stylized selfies.
Users can direct questions to Meta’s new AI model, Llama 3.1 405B, which is capable of handling complex queries.
Previous versions of Meta AI struggled with facts, numbers, and web searches; Llama 3.1 405B, however, is claimed to perform better on math, coding, and scientific queries.
Users must manually switch to Llama 3.1 405B and are limited to a certain number of queries per week, after which they are switched to Llama 3.1 70B.
A new feature, Imagine Yourself, allows users to generate images from a photo and prompt.
For example, a prompt like “Imagine me surfing” can be used to create a personalized image. The feature is currently available in beta.
Meta’s terms of use now state that public posts and images on its platforms can be used for training AI models, raising data and privacy concerns. The opt-out process for users is reported to be convoluted.
Additionally, new editing tools have been introduced that let users modify objects within a photo using prompts.
An “Edit with AI” button will be available next month for fine-tuning options, and new shortcuts will facilitate easier sharing of AI-generated images across Meta apps.
Meta AI will also replace Meta Quest’s Voice Commands feature, enabling users to interact with their physical environment using AI.
For example, users can hold up a pair of shorts and ask what top would match them.
Furthermore, Meta AI is now available in 22 countries, including Argentina, Chile, Colombia, Ecuador, Mexico, Peru, and Cameroon. It also supports additional languages such as French, German, Hindi, Hindi-Romanized Script, Italian, Portuguese, and Spanish, with more languages to be added in the future.