
With Easel, ex-Snap researchers are building the next-generation Bitmoji thanks to AI | TechCrunch


Easel is a new startup that sits at the intersection of generative AI and social trends, founded by two former employees at Snap. The company has been working on an app that lets you create images of yourself and your friends doing cool things directly from your favorite iMessage conversations.

There’s a reason I mentioned that the co-founders worked at Snap before founding Easel. While Snap may never reach the scale of Instagram or TikTok, it has arguably been the most innovative social company since social apps started taking over smartphone home screens.

Before Apple made augmented reality and virtual reality cool again, Snap blazed the AR trail with lenses. Even if you never really used Snapchat, chances are you’ve played around with goofy lenses on your phone or using someone else’s phone. The feature has had a massive cultural impact.

Similarly, before Meta tried to make virtual avatars cool again with massive investments in Horizon Worlds and the company’s Reality Labs division, Snap made a curious move when it acquired Bitmoji back in 2016. At the time, people thought the ability to create a virtual avatar and use it to communicate with your friends was just a fad. Now, with Memojis in iMessage and FaceTime, and Meta avatars also popping up in Meta’s apps, virtual avatars have become a fun, innovative way to express yourself.

“I was at Snap for five years. Before that, I was at Stanford. I moved down to LA to join Snap in Bobby Murphy’s research team, where we kind of worked on a range of futuristic things,” Easel co-founder and CEO Rajan Vaish told TechCrunch in an exclusive interview. He co-founded Easel with Sven Kratz, who was a senior research engineer at Snap.

But this team was dissolved in 2022 as part of Snap’s various rounds of layoffs. The duo used the opportunity to bounce back and keep innovating — but outside of Snap.

AI as a personal communication vector

Easel is using generative AI to let users create Bitmoji-style stickers of themselves drinking coffee, chilling at the beach, riding a bicycle — anything you want as long as it can be described and generated by an AI model.

When you first start using Easel, you capture a few seconds of your face so that the company can create a personal AI model and use it to generate stickers. Easel is currently using Stable Diffusion’s technology to create images. The fact that you can generate images with your own face in them is both a bit uncanny and much more engaging than an average AI-generated image.

“Once you give your photos, we start training on our servers. And then we create an AI avatar model for you. We now know what your face looks like, how your hair looks like, etc.,” Vaish said.

But Easel isn’t just an image generation product. It’s a multiplayer experience that lives in your conversations. The startup has opted to integrate Easel into the native iOS Messages app so that you don’t have to move to a new platform, and create a new social graph, just to swap funny personal stickers.

Instead, sending an Easel sticker works just like sending an image via iMessage. On the receiving end, when you tap on the image, it opens up Easel on top of your conversation. This way, your friends can also install Easel and remix your stickers. This is one of the key features behind Bitmoji, too, as you can create scenes with both you and your friend in the stickers, amping up the virality.

Image Credits: Easel

Easel allows users to create more highly customized personal stickers than Bitmoji. Say, for example, you want a sticker that shows you’ll soon be drinking cocktails with your buddies in Paris. You could use a generic cocktail-drinking Bitmoji — but it won’t look like Paris. (And you’ve already seen this Bitmoji many times before.) With Easel, by contrast — and thanks to generative AI — you get to design the background scenes, locations and scenarios where your personal avatar appears.

Finally, Easel users can also share stickers to the app’s public feed to inspire others. This can create a sort of seasonality within the app as you might see a lot of firework stickers around July 4, for instance. It’s also a laid-back use case for Easel, as you can scroll until you find a sticker you like, tap “remix” and send a similar sticker (but with your own face) to your friends.

Easel has already secured $2.65 million in funding from Unusual Ventures, f7 Ventures and Corazon Capital, as well as various angel investors, including a few professors from Stanford University.

Now let’s see how well Easel blends into people’s conversations. “We have learned two very unique use cases. One is that there’s a big demographic that is not very comfortable sharing their faces,” said Vaish. “I’m not a selfie person and a lot of people are not. This is allowing them to share what they’re up to in a more visual format.”

“The second one is that Easel allows people to stay in the moment,” he added, pointing out that sometimes you just don’t want to take out your phone and capture the moment. But Easel still enables a form of visual communication after the fact.



Anthropic researchers wear down AI ethics with repeated questions | TechCrunch


How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

They call the approach “many-shot jailbreaking” and have both written a paper about it and also informed their peers in the AI community about it so it can be mitigated.

The vulnerability is a new one, resulting from the increased “context window” of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic’s researchers found was that models with large context windows tend to perform better on many tasks when there are lots of examples of that task within the prompt. So if the prompt (or priming document, like a big list of trivia that the model has in context) contains lots of trivia questions, the answers actually get better over time: a fact the model might have gotten wrong as the first question, it may get right as the hundredth.
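The key mechanical detail is that the example dialogue is written directly into the prompt text, not played out turn by turn. A minimal sketch of how such a many-shot priming prompt might be assembled — the `build_many_shot_prompt` helper and the trivia pairs are illustrative, not taken from the paper:

```python
def build_many_shot_prompt(examples, final_question):
    """Write many Q&A pairs into a single prompt string, then append
    the question we actually want answered. The model treats the
    written-in dialogue as prior context (in-context learning)."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {final_question}\nA:"

# Benign illustration: dozens of trivia pairs prime the model before
# the final question, mimicking a large-context priming document.
trivia = [
    ("What is the capital of France?", "Paris"),
    ("How many planets orbit the Sun?", "Eight"),
] * 50  # repeated to fill a long context window

prompt = build_many_shot_prompt(trivia, "Who wrote Hamlet?")
```

The attack described in the paper uses the same structure, but fills the shots with progressively harmful questions and compliant answers before the final request.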

But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if the prompt shows it answering 99 other questions of lesser harmfulness and then asks it to build a bomb … it’s a lot more likely to comply.

(Update: I misunderstood the research initially as actually having the model answer the series of priming prompts, but the questions and answers are written into the prompt itself. This makes more sense, and I’ve updated the post to reflect it.)

Image Credits: Anthropic

Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window or prompt itself. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers — though you have to supply the answers as well as the questions in order to create the effect.

The team has already informed its peers, and indeed competitors, about this attack, something it hopes will “foster a culture where exploits like this are openly shared among LLM providers and researchers.”

For their own mitigation, they found that although limiting the context window helps, it also has a negative effect on the model’s performance. Can’t have that — so they are working on classifying and contextualizing queries before they go to the model. Of course, that just makes it so you have a different model to fool … but at this stage, goalpost-moving in AI security is to be expected.
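Anthropic hasn’t published the details of that classification step, but the gating idea itself is simple: run each incoming prompt through a cheap screening check before the main model ever sees it. A sketch under stated assumptions — `screen_prompt` and the toy heuristic below are hypothetical stand-ins, not Anthropic’s actual classifier:

```python
def screen_prompt(prompt: str, classify) -> str:
    """Route a prompt through a screening classifier first; flagged
    prompts get a refusal instead of reaching the main model."""
    if classify(prompt):  # classifier returns True when unsafe
        return "REFUSED"
    return "FORWARD_TO_MODEL"

def toy_classifier(prompt: str) -> bool:
    """Hypothetical heuristic: flag prompts that combine a long
    many-shot priming section with a harmful-looking final question."""
    many_shot = prompt.count("Q:") > 20
    flagged_tail = "bomb" in prompt.rsplit("Q:", 1)[-1].lower()
    return many_shot and flagged_tail
```

A real deployment would use a learned classifier rather than keyword counting, which is exactly why the article notes this just gives attackers a different model to fool.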

