
Meta thinks it's a good idea for students to wear Quest headsets in class | TechCrunch


Meta continues to field criticism over how it handles younger consumers using its platforms, but the company is also planning new products that will cater to them. On Monday, the company announced that later this year it will be launching a new education product for Quest to position its VR headset as a go-to device for teaching in classrooms.

The product has yet to be named, but in a blog post describing it, Nick Clegg, the company’s president of global affairs (the ex-politician who has become Meta’s executive most likely to deliver messaging on its more controversial and divisive topics), said it will include a hub for education-specific apps and features, as well as the ability to manage multiple headsets at once without having to update each device individually.

Business models for hardware and services also have yet to be spelled out. With nothing on the table, the company is framing it as a long-term bet.

“We accept that it’s going to take a long time, and we’re not going to be making any money on this anytime soon,” Clegg said in an interview with Axios.

On the plus side, a push into education could mean more diversified content for Quest users, along with a wider ecosystem of developers building for the platform — not the killer app critics say is still missing from VR, but at least more action.

On more problematic ground, the news follows a few less flattering developments at the company. Meta’s instant messaging service WhatsApp has been getting a lot of heat for lowering the minimum age for users to 13 in the UK and EU (it had previously been 16).

Monday’s announcement also comes shortly after Meta began prompting Quest users to confirm their age so it can serve teens and preteens age-appropriate experiences.

The new initiative will roll out later this year and will only be available to institutions with students 13 years old and up. Meta said the product will launch first in the 20 markets where it already supports Quest for Business, the company’s workplace-focused $14.99/month subscription. That list includes the U.S., Canada, the United Kingdom and several other English-speaking markets, along with Japan and much of western Europe.

A number of companies are already exploring VR in the classroom, among them ImmersionVR, ClassVR and ArborVR, not to mention the likes of Microsoft, which has been pushing its HoloLens as an educational tool for a while now.

It’s not clear how ubiquitous VR use is in schools: one provider, ClassVR, claims that 40,000 classrooms worldwide are using its products.

But all the same, hurdles to mass-market usage remain. It’s not clear, for example, whether strapping a headset to a student’s face actually helps in a live educational environment, given existing research suggesting that young people already get too much screen time.

Another big question mark relates to cost: buying headsets (the Quest 3, the latest model, starts at around $500 for the basic version), buying apps and then supporting all of that infrastructure. Meta said it has already donated Quest headsets to 15 universities in the U.S., but it’s not clear how far it will go to subsidize growth longer-term.

 



Anthropic researchers wear down AI ethics with repeated questions | TechCrunch


How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

They call the approach “many-shot jailbreaking,” and they have both written a paper about it and informed their peers in the AI community so that it can be mitigated.

The vulnerability is a new one, resulting from the increased “context window” of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic’s researchers found was that models with large context windows tend to perform better on many tasks when there are lots of examples of that task within the prompt. So if the prompt (or priming document, such as a big list of trivia the model has in context) contains lots of trivia questions, the answers actually get better over time: a fact the model might have gotten wrong as the first question, it may get right as the hundredth.

But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if the prompt shows it answering 99 other questions of lesser harmfulness and then asks it to build a bomb … it’s a lot more likely to comply.

(Update: I misunderstood the research initially as actually having the model answer the series of priming prompts, but the questions and answers are written into the prompt itself. This makes more sense, and I’ve updated the post to reflect it.)
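
To make that structure concrete, here is a rough sketch of how such a many-shot prompt might be assembled. Everything in it is illustrative: the trivia pairs are made up, and send_to_model() is a hypothetical placeholder rather than any real API. The key point, per the update above, is that all the question/answer pairs are written into a single prompt string.

# Hypothetical sketch of many-shot prompting. The example pairs and
# send_to_model() are placeholders, not anything from Anthropic's paper.

examples = [
    ("What is the capital of France?", "Paris."),
    ("Who wrote Hamlet?", "William Shakespeare."),
    # ... dozens or hundreds more pairs; a large context window holds them all
]

def build_many_shot_prompt(pairs, final_question):
    # Every Q/A pair is written directly into the prompt text, so the
    # model "sees" all of them in context before the real question.
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in pairs)
    return f"{shots}\n\nQ: {final_question}\nA:"

prompt = build_many_shot_prompt(examples, "What is the tallest mountain on Earth?")
# response = send_to_model(prompt)  # hypothetical LLM call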

Image Credits: Anthropic

Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window or prompt itself. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers — though you have to supply the answers as well as the questions in order to create the effect.

The team has already informed its peers, and indeed competitors, about this attack, something it hopes will “foster a culture where exploits like this are openly shared among LLM providers and researchers.”

For their own mitigation, they found that although limiting the context window helps, it also hurts the model’s performance. Can’t have that, so they are working on classifying and contextualizing queries before they go to the model. Of course, that just gives attackers a different model to fool … but at this stage, goalpost-moving in AI security is to be expected.
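
Anthropic hasn’t detailed how that classification layer works, but the general shape of such a gate is simple to sketch. In the hypothetical Python below, classify_prompt() and main_model() are stand-ins; nothing here reflects Anthropic’s actual implementation.

# Hypothetical sketch of a pre-model gate: screen each incoming prompt
# before it ever reaches the main LLM. In practice the classifier would
# itself be a trained model scoring the whole prompt, many-shot
# preamble included, not a keyword check like this toy version.

def classify_prompt(prompt: str) -> bool:
    """Return True if the prompt looks like a jailbreak attempt."""
    suspicious = ["how to build a bomb"]  # toy placeholder rule
    return any(s in prompt.lower() for s in suspicious)

def main_model(prompt: str) -> str:
    # Placeholder for the actual LLM call.
    return "(model response)"

def answer(prompt: str) -> str:
    if classify_prompt(prompt):
        return "Sorry, I can't help with that."
    return main_model(prompt)

As the article notes, the catch is that the gate is itself a model, which simply gives an attacker a different target to fool.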

