Unleash the Power of Your Vision with AI
Step into the future with Meta’s Ray-Ban glasses, where generative AI technology transforms the way we perceive and interact with our surroundings. Just ask your glasses!
Did my AI glasses just tell me that Optimus Prime in robot form appears to transform into a red and blue Peterbilt truck?
Meta has launched a new feature for its second-generation Ray-Ban glasses that lets you use AI to analyze what the glasses’ cameras see. The feature, currently in early access, is among the first consumer applications of multimodal AI in a wearable.
How does it work?
The feature uses voice commands and generative AI to interpret images captured by the glasses’ cameras. You ask the glasses to look at something and then perform a task, such as identifying an object, translating text, writing a caption, or suggesting a recipe. For example, you can say “Hey Meta, look and tell me what plant this is” or “Hey Meta, look and write a funny caption about what’s in front of me”.
The glasses take a photo of what you are looking at, send it to Meta’s AI engine, and then speak the response back to you. You can also review the photo and the AI’s response in the Meta View phone app that pairs with the glasses.
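The flow described above can be sketched in a few lines of code. This is purely illustrative: Meta does not expose a public API for this feature, so every name here (GlassesQuery, parse_task, handle_query, the wake phrase handling) is a hypothetical stand-in, not Meta’s actual implementation.

```python
# Hypothetical sketch of the "look and ask" flow. All names are invented
# for illustration; Meta does not publish an API for the glasses.
from dataclasses import dataclass

@dataclass
class GlassesQuery:
    transcript: str   # the spoken command, e.g. "Hey Meta, look and ..."
    photo: bytes      # image captured by the glasses' camera

def parse_task(transcript: str) -> str:
    """Strip the assumed wake phrase and return the task the user asked for."""
    wake = "hey meta, look and"
    text = transcript.lower().strip()
    if text.startswith(wake):
        return text[len(wake):].strip()
    return text

def handle_query(query: GlassesQuery, vision_model) -> str:
    """Send the photo plus the parsed task to a multimodal model and
    return the text to be spoken back through the glasses' speakers."""
    task = parse_task(query.transcript)
    return vision_model(image=query.photo, prompt=task)

# Example with a stand-in model in place of Meta's AI engine:
fake_model = lambda image, prompt: f"Answering: {prompt}"
q = GlassesQuery("Hey Meta, look and tell me what plant this is", b"...")
print(handle_query(q, fake_model))
# -> Answering: tell me what plant this is
```

The key idea is that the glasses themselves do little work: the spoken request and the photo are paired into a single multimodal query, and the heavy lifting happens server-side before the answer is read aloud.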
What can it do?
The feature can handle a variety of tasks, depending on what you ask. These include:
Recognizing objects, animals, plants, foods, colors, logos, etc.
Translating text from other languages
Reading labels, menus, signs, etc.
Making captions, jokes, memes, etc.
Suggesting recipes, activities, products, etc.
Comparing items, prices, features, etc.
The feature is still in beta, so the accuracy and speed of the responses may vary. Meta says it is using anonymized query data to improve its AI services during this phase.
Why is it important?
The feature matters because it shows how Meta is pushing the boundaries of wearable AI and how such devices can enhance everyday life. By combining voice and vision, the glasses offer a more natural, intuitive way to interact with the world and get information, with uses ranging from education and entertainment to accessibility.
The feature is also a precursor to the more ambitious AI Meta plans to develop, which will draw on more forms of sensory data and be more seamless and context-aware. Meta’s CEO Mark Zuckerberg has said that the ultimate goal is to create AI that can “understand the context of what you’re doing, where you are, and what you need”.
When can I try it?
The feature has started to roll out to Meta’s second-generation Ray-Ban glasses as “Early Access”, but it is not available to everyone yet. Meta says it will gradually invite more people to join the early-access program. The feature is expected to launch officially next year.
If you are interested in trying the feature, you will need a pair of Meta’s second-generation Ray-Ban glasses, which cost $300 and come in a range of styles and colors, along with the Meta View phone app and a Meta account. You can find more information on Ray-Ban’s website.