
Tag: chatbot

ChatGPT: Everything you need to know about the AI chatbot


ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies for more wide-ranging needs. And that growth has propelled OpenAI itself into becoming […]


UK data protection watchdog ends privacy probe of Snap's GenAI chatbot, but warns industry | TechCrunch


The U.K.’s data protection watchdog has closed an almost year-long investigation of Snap’s AI chatbot, My AI — saying it’s satisfied the social media firm has addressed concerns about risks to children’s privacy. At the same time, the Information Commissioner’s Office (ICO) issued a general warning to industry to be proactive about assessing risks to […]


Quora CEO Adam D’Angelo talks about AI, chatbot platform Poe, and why OpenAI is not a competitor | TechCrunch


Last November, Adam D’Angelo found himself at the epicenter of one of the biggest controversies in the tech industry. The board of OpenAI — the $80 billion startup leading the AI bandwagon — had abruptly booted its CEO, Sam Altman, only to reinstate him just days later. D’Angelo was on the board that dismissed Altman… and he was (and remains) on the board that brought him back in. In fact, he was the only person who kept his seat amidst the ensuing restructuring that saw a lot of the original board leave.

It was certainly a rocky time for OpenAI, but it was perhaps doubly so for D’Angelo, since the drama was playing out while his own company, Quora, was taking big steps towards AI.

Quora, the crowdsourced Q&A site D’Angelo co-founded and leads as CEO, had been building an AI platform of its own while also fundraising (a $75 million round that valued it at $425 million, per PitchBook). The company in February 2023 had launched Poe (short for Platform for Open Exploration), which lets users ask questions of and talk to a variety of chatbots, lets developers build their own bots, and offers a bot monetization program and marketplace similar to OpenAI’s GPT Store.

Quora’s core Q&A service was facing some big questions, too. Incumbent search engines like Google and Bing were beginning to use AI to produce more fluid results and answer questions, and with tools like ChatGPT and Perplexity being widely available, what could Quora do to secure a position as one of the top websites where people could get their questions answered? More crucially, does anyone actually want or need crowdsourced Q&A anymore?

For D’Angelo, those questions are intrinsic to his pursuit of AI, which he sees as an important tool that people can use to tap the Internet’s collective knowledge. An important, if understated, figure in tech, he has been involved in efforts to organize and share that knowledge for a long time — he was friends with Mark Zuckerberg in high school, where in 2002 the pair built a digital music suggestion service called Synapse that, according to a vintage piece in the Harvard Crimson, turned down acquisition offers from Microsoft and others. Later, he became CTO at Facebook when it was just starting out, and eventually co-founded Quora.

All of that, it seems, was a long road toward building AI tools. I recently caught up with D’Angelo about the challenges and opportunities in AI today, how to build and support a developer community, and what role humans can play when it comes to sharing and accessing knowledge. Here are a few highlights from our conversation:

Humans are better at answers than AI — for now

The hype around AI seems to be having less of an impact on the search for information than you might think. D’Angelo said that Quora is seeing record numbers of users despite the proliferation of AI tools — although he declined to update the 400 million monthly active users figure it disclosed last July.

Still, there is a bridge between what Quora set out to do and D’Angelo’s interest in AI. Recently, in a conversation with David George, a general partner at a16z, D’Angelo said he was drawn to social networking because he was actually interested in AI. The latter was hard to develop at that time, but he saw social networks as an alternative architecture for achieving the same idea: People, assembled in a social network, in his view, almost played the role of living, large information models, as they could provide news, entertainment and more to each other.

He worked on that concept when he was with Facebook, and later, founded Quora to distill the role social networks could play in answering questions. Now, AI is taking over that role.

“In the past, humans were substituted for AI to provide answers. You could ask a question like, ‘What is the capital of California?’ and humans would answer that on Quora. Now, you can use AI tools to get that answer,” he said.

But AI, at least in its current shape, cannot answer every question people might have. That, D’Angelo believes, means human contributors still hold a lot of value.

“Quora has always been founded on the idea that humans have a lot of knowledge they have access to in their heads that’s not on the internet anywhere. And AI will not have access to any of that knowledge,” D’Angelo said.

He acknowledged that AI still has a hallucination problem, which makes it hard to rely on such answers, even if newer, more advanced models are slowly making progress in tackling that issue.

Supporting developers on Poe

Quora opened up Poe to all users last year after a few months of closed beta testing. Since then, the company has introduced tools to create and browse the bots on its marketplace.

The company’s pitch is that consumers get to use all the different kinds of models or bots on the platform. For developers, the allure lies in the possibility of reaching millions of users without having to worry about distribution across platforms. And developers can earn money on Poe in two ways: The first is through a referral when a user becomes a Poe premium subscriber via their bot; the second is by setting a per-message rate, so they get paid based on how often people use their bot.
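As a rough, back-of-the-envelope illustration of how those two payout paths could combine, here is a short Python sketch. The referral bounty and per-message rate below are made-up placeholders, and the referral payout is modeled as a one-time bounty purely as an assumption; Quora has not disclosed how either program is actually priced.

# Hypothetical figures only -- Quora's real referral payout and per-message rates are not public.
REFERRAL_BOUNTY_USD = 10.00    # assumed one-time payout when a referred user subscribes
PER_MESSAGE_RATE_USD = 0.002   # assumed creator-set price per message handled by the bot

def monthly_payout(new_subscribers_referred: int, messages_handled: int) -> float:
    """Estimated monthly earnings for a bot creator under both programs."""
    referral_income = new_subscribers_referred * REFERRAL_BOUNTY_USD
    usage_income = messages_handled * PER_MESSAGE_RATE_USD
    return referral_income + usage_income

# Example: a bot that converts 50 subscribers and answers 200,000 messages in a month
print(f"${monthly_payout(50, 200_000):,.2f}")  # 50 * $10 + 200,000 * $0.002 = $900.00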

In essence, Poe offers developers and users access to different large language models, but its functionality is similar to OpenAI’s ChatGPT and GPT Store.

But that means both platforms face some of the same challenges. They make it easy for anyone to create bots with prompts, which makes it hard for developers to stand out. D’Angelo told me that there are already a million bots on the platform, compared to 3 million custom GPTs on ChatGPT. For reference, it took Apple’s App Store more than five years to cross the million-app mark.

Both Poe and the GPT Store also suffer from a ton of spam: similarly named bots, bots claiming to help users evade plagiarism checks, and even ones that flirt with copyright law. Poe has also released a feature that lets users chat with multiple bots in one conversation. All that noise makes it hard to choose a bot that will do the job well.

Despite these challenges, D’Angelo says that Quora wants to help developers earn sustainable money by improving bot discovery.

“One of our goals with developers is to be able to make a living [out of making AI bots] and cover their operational costs,” he said. “We have taken a big step forward with the pay-per-message feature, but we also want to help developers get distribution inside the platform as much as possible. So, we are working on improving our recommendation system so more people can find out about the bots.”

No ads on Poe just yet

Poe is growing steadily, but it is still a lot smaller than ChatGPT. Market intelligence firm SimilarWeb suggests Poe has 4 million monthly active users in the U.S. (iOS and Android) and 3.1 million monthly active users worldwide (Android only). Compare that to ChatGPT, which now averages 100 million weekly users.

D’Angelo said that the company will stay away from ads, instead relying on Poe’s $19.99 per month subscription product to generate revenue. That is in contrast to some of the other AI-powered tools on the market: Perplexity, Bing Search, and Search Generative Experience (SGE) by Google all feature ads.

Quora and D’Angelo declined to disclose revenue figures, but data from analytics firm Sensor Tower indicates that Poe users have spent $7.3 million on subscriptions since its launch, amounting to close to 40,000 paid users. In comparison, ChatGPT has more than 1 million paid subscribers, according to Sensor Tower.

More AI tools for Quora and Poe

Despite stating the importance of human answers, Quora is already experimenting with answers written by Poe. The site surfaces the AI-written answer to some questions with a link that lets you chat with Poe if you have further questions.

Quora has started experimenting with AI-powered answers for some questions. Image Credits: Screenshot by TechCrunch

D’Angelo said that Quora had already deployed systems to rate different human answers. Now, it is applying techniques like asking users through a survey if an AI-generated answer is useful.

“My goal is for the AI-written answers to be fairly ranked and only to be above a human answer if they are more useful than the human answer,” he said.

D’Angelo also wants to avoid having Quora tagged as an “answer engine.”

“I think we never really saw Quora as an answer engine. That term kind of implies that there are AI-only answers. Quora is really about human knowledge, and we’ll have AI enhance it,” he said.

Quora is also working on AI tools that users can use to write answers and hopes to release them soon. D’Angelo noted that one of the tools it is testing allows users to generate an image based on their answers.

The company is using AI in a few other ways, too. One involves trying to catch bots or users using automation to answer questions on Quora. D’Angelo didn’t share details about the project, saying that doing so would give a heads-up to perpetrators who are trying to game the system.

A few outlets and users have recently pointed out that the answer quality on Quora has plummeted. To that, D’Angelo said people feel that the overall standard of answers has decreased because low-quality answers have more visibility. He said AI is helping the company distinguish between answers of different quality, and the early results look promising.

On Quora’s relationship with OpenAI

D’Angelo declined to discuss any of the OpenAI drama — “I just can’t talk about any of this stuff,” he said. “I’m not here to represent OpenAI. I can just represent Quora.” But he did say that he doesn’t see OpenAI as a competitor, because the bigger startup has, well, bigger ambitions.

“There is some sense of overlap in terms of what users can do on the GPT Store and what they can do on Poe. But that’s minor in the grand scheme of things. OpenAI is working towards this big mission to build AGI [Artificial General Intelligence]. And at Quora, we are looking to make AI products available to the world — including OpenAI’s products.”

Quora also continues to be a “big customer” of OpenAI and D’Angelo expects more collaboration with the company than competition.

“We spend a lot of money as a customer with OpenAI, because OpenAI is the biggest source of models for Poe,” he added.

While D’Angelo did mention that Quora pays “tens of millions” to developers on Poe and companies whose models the platform uses, he didn’t explicitly detail how these payments compared to the payout to OpenAI.

Quora currently doesn’t have any data licensing deals with any of the major companies, and it is not thinking about building its own model either, D’Angelo told TechCrunch.

“We are not in a rush to license our data. We want to make sure our rights and users’ rights are respected. Right now, there is not a lot of clarity around how all of this (AI landscape) will play out. So right now, we are just waiting before taking any steps in this direction,” D’Angelo said.

The company’s also relatively fresh out of its last fundraise, so it is focused on building AI across the business and improving revenue growth on its existing products. He said that Quora will go public “at some point,” but that is not the focus right now.





Snapchat's 'My AI' chatbot can now set in-app reminders and countdowns | TechCrunch


Snapchat is launching the ability for users to set in-app reminders with the help of its My AI chatbot, the company announced on Wednesday. The social network is also rolling out editable chats, AI-powered custom Bitmoji looks, map reactions, emoji reactions, and more.

With the new AI reminders feature, Snapchat is hoping users will use its app instead of their device’s default clock app when setting countdowns or reminders. Users can do so by asking the app’s My AI chatbot to set a reminder for a specific task or event directly in the AI’s chat window or when chatting with a friend.

The feature lets users do things like set a reminder to finish an assignment or set a countdown for an upcoming date night, for example. It also pushes Snapchat into productivity app territory, potentially driving increased usage.

Image Credits: Snapchat

As for the editable chats, users will soon be able to edit their messages for up to five minutes after sending them. The feature will be available first for Snapchat+ subscribers before rolling out to all users at some point in the future, the company says.

In addition, users will soon be able to design their own digital garments for their Bitmoji using generative AI.

For instance, you can customize a pattern for a sweater for your Bitmoji by typing out a prompt like “vibrant graffiti” or “skull flower.” The app will then generate a pattern that you can further customize by zooming in or out. Once you’re happy with a look, you can apply it to your Bitmoji or save it for future use.

Image Credits: Snapchat

In another update, users who have opted in to share their location with friends can now quickly react to their map locations. For instance, if you pass your friend on your morning commute, you can send them a wave. Or, if you see that your friend has made it home safely after hanging out, you can send them a heart.

Snapchat is also launching emoji reactions in chats. Although users have been able to react to messages with their Bitmoji to quickly respond to a chat, they can now do so with an emoji. Emoji reactions have become popular on many other platforms, like Instagram and Messenger, so it makes sense for Snapchat to roll out the functionality as well.

The launch of the new features comes a few days after Snap reported that it had 422 million daily active users in Q1 2024, an increase of 39 million, or 10% year-over-year. The company also saw the number of Snapchat+ subscribers more than triple year-over-year, surpassing 9 million subscribers.



Meta adds its AI chatbot, powered by Llama 3, to the search bar across its apps | TechCrunch


Meta’s making several big moves today to promote its AI services across its platform. The company has upgraded its AI chatbot with its newest large language model, Llama 3, and it is now running it in the search bar of its four major apps (Facebook, Messenger, Instagram and WhatsApp) across multiple countries. Alongside this, the company launched other new features, such as faster image generation and access to web search results.

This confirms and extends a test that TechCrunch reported on last week, when we spotted that the company had started testing Meta AI on Instagram’s search bar.

Additionally, the company is also launching a new meta.ai site for users to access the chatbot.

The news underscores Meta’s efforts to stake out a position as a mover and shaker amid the current hype for generative AI tools among consumers. Chasing after other popular services in the market such as those from OpenAI, Mark Zuckerberg claimed today that Meta AI is possibly the “most intelligent AI assistant that you can freely use.”

Meta first rolled out Meta AI in the U.S. last year. It is now expanding the chatbot in the English language in over a dozen countries, including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe.

The company last week started testing Meta AI in countries like India and Nigeria, but notably, India was missing from today’s announcement. Meta said that it plans to keep Meta AI in test mode in the country for now.

“We continue to learn from our user tests in India. As we do with many of our AI products and features, we test them publicly in varying phases and in a limited capacity,” a company spokesperson said in a statement.

New features

Users could already ask Meta AI for writing or recipe suggestions. Now, they can also ask for web-related results powered by Google and Bing.

Image Credits: Meta

The company said that it is also making image generation faster. Plus, users can ask Meta AI to animate an image or turn an image into a GIF. Users can see the AI tool modifying the image in real time as they type. The company has also worked on improving the image quality of AI-generated photos.

Image Credits: Meta

AI-powered image-generation tools have been bad at spelling out words. Meta claims that its new model has also shown improvements in this area.

All AI things everywhere at once

Meta is adopting the approach of having Meta AI available in as many places as it can. It is making the bot available on the search bar, in individual and group chats and even in the feed.

Image Credits: Meta

The company said that you can ask questions related to posts in your Facebook feed. For example, if you see a photo of the aurora borealis, you could ask Meta AI about the best time to visit Iceland to see the northern lights.

Image Credits: Meta

Meta AI is already available on the Ray-Ban smart glasses, and the company said that soon it will be available on the Meta Quest headset, too.

There are downsides to having AI in so many places. Specifically, the models can “hallucinate” and make up random, often nonsensical responses, so using them across multiple platforms could end up presenting a content moderation nightmare. Earlier this week, 404 Media reported that Meta AI, chatting in a parents group, said that it had a gifted and academically challenged child who attended a particular school in New York. (Parents spotted the odd message, and Meta eventually weighed in and removed the answer, saying that the company would continue to work on improving these systems.)

“We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs. Since we launched, we’ve constantly released updates and improvements to our models, and we’re continuing to work on making them better,” Meta told 404 Media.



X makes Grok chatbot available to premium subscribers | TechCrunch


Social network X is rolling out access to xAI’s Grok chatbot to Premium tier subscribers after Elon Musk announced the expansion to more paid users last month. The company said on its support page that only Premium and Premium+ users can interact with the chatbot in select regions.

Last year, after Musk’s xAI announced Grok, it made the chatbot available to Premium+ users — people paying $16 per month, or $168 per year, for a subscription. With the latest update, users paying $8 per month can also access the chatbot.

Users can chat with Grok in a “Regular mode” or a “Fun mode.” Just like any other large language model (LLM) product, Grok shows labels indicating that the chatbot may return inaccurate answers.

We have already seen some examples of that. Earlier this week, X rolled out a new explore view inside Grok where the chatbot summarizes trending news stories. Notably, the Jeff Bezos- and Nvidia-backed Perplexity AI also summarizes news stories.

However, Grok seems to go one step further than just summarizing stories by writing headlines. As Mashable wrote, the chatbot wrote a fake headline saying “Iran Strikes Tel Aviv with Heavy Missiles.”

Musk likely wants more people to use the Grok chatbot to rival other products such as OpenAI’s ChatGPT, Google’s Gemini or Anthropic’s Claude. Over the last few months, he has been openly critical of OpenAI’s operations. Musk even sued the company in March over the “betrayal” of its nonprofit goal. In response, OpenAI filed papers seeking the dismissal of all of Musk’s claims and released email exchanges between the Tesla CEO and the company.

Last month, xAI open sourced Grok but without any training data details. As my colleague Devin Coldewey argued, there are still questions about whether this is the latest version of the model and if the company will be more transparent about its approach to the development of the model and information about the training data.





What is Elon Musk's Grok chatbot and how does it work? | TechCrunch


You might’ve heard of Grok, X’s answer to OpenAI’s ChatGPT. It’s a chatbot and, in that sense, behaves as you’d expect — answering questions about current events, pop culture and so on. But unlike other chatbots, Grok has “a bit of wit,” as X owner Elon Musk puts it, and “a rebellious streak.”

Long story short, Grok is willing to speak to topics that are usually off-limits to other chatbots, like polarizing political theories and conspiracies. And it’ll use less-than-polite language while doing so — for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”

But ostensibly, Grok’s biggest selling point is its ability to access real-time X data — an ability no other chatbots have, thanks to X’s decision to gatekeep that data. Ask it “What’s happening in AI today?” and Grok will piece together a response from very recent headlines, while ChatGPT will provide only vague answers that reflect the limits of its training data (and filters on its web access). Earlier this week, Musk pledged that he would open source Grok, without revealing precisely what that meant.

So, you’re probably wondering: How does Grok work? What can it do? And how can I access it? You’ve come to the right place. We’ve put together this handy guide to help explain all things Grok. We’ll keep it up to date as Grok changes and evolves.

How does Grok work?

Grok is the invention of xAI, Elon Musk’s AI startup — a company reportedly in the process of raising billions in venture capital. (Developing AI is expensive.)

Underpinning Grok is a generative AI model called Grok-1, developed over the course of months on a cluster of “tens of thousands” of GPUs (according to an xAI blog post). To train it, xAI sourced data from the web (dated up to Q3 2023) and from feedback from human assistants that xAI refers to as “AI tutors.”

On popular benchmarks, Grok-1 is about as capable as Meta’s open source Llama 2 chatbot model and surpasses OpenAI’s GPT-3.5, xAI claims.

Image Credits: xAI

Human-guided feedback, or reinforcement learning from human feedback (RLHF), is the way most AI-powered chatbots are fine-tuned these days. RLHF involves training a generative model, then gathering human feedback (typically preference judgments between pairs of model outputs) to train a “reward” model, and finally fine-tuning the generative model against that reward model via reinforcement learning.
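To make that pipeline concrete, below is a toy sketch of the three RLHF stages in Python, using only NumPy. The canned responses, made-up preference data, and the plain REINFORCE update standing in for PPO are all illustrative assumptions; real systems fine-tune large neural networks on human-labeled comparisons, and nothing here reflects how xAI or any other lab actually implements it.

# Toy RLHF sketch: (1) a "generative model", (2) a reward model fit on preferences,
# (3) reinforcement learning against that reward model. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: the "generative model" is just a softmax distribution over four canned answers.
responses = ["helpful answer", "rude answer", "off-topic answer", "refusal"]
features = np.eye(len(responses))         # one-hot features standing in for text embeddings
policy_logits = np.zeros(len(responses))  # untuned model: uniform over the responses

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Stage 2: gather human preference data (winner, loser) and fit a "reward" model
# with a Bradley-Terry / logistic objective.
preferences = [(0, 1), (0, 2), (3, 1), (0, 3)] * 50
reward_w = np.zeros(len(responses))
for _ in range(200):
    grad = np.zeros_like(reward_w)
    for winner, loser in preferences:
        diff = features[winner] - features[loser]
        p_win = 1.0 / (1.0 + np.exp(-reward_w @ diff))
        grad += (1.0 - p_win) * diff      # gradient of the log-likelihood
    reward_w += 0.05 * grad / len(preferences)

def reward(i):
    return reward_w @ features[i]

# Stage 3: fine-tune the generative model against the reward model with a simple
# policy-gradient (REINFORCE) loop standing in for PPO.
for _ in range(500):
    probs = softmax(policy_logits)
    sampled = rng.choice(len(responses), p=probs)
    baseline = sum(probs[k] * reward(k) for k in range(len(responses)))
    grad = -probs.copy()
    grad[sampled] += 1.0                  # d log pi(sampled) / d logits
    policy_logits += 0.1 * (reward(sampled) - baseline) * grad

# Probability mass ends up concentrated on the responses humans preferred.
print({resp: round(float(p), 3) for resp, p in zip(responses, softmax(policy_logits))})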

RLHF is quite good at “teaching” models to follow instructions — but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news. And these can be severe — like wrongly claiming that the Israel–Palestine conflict reached a cease-fire when it hadn’t.

For questions that stretch beyond its knowledge base, Grok leverages “real-time access” to info on X (and from Tesla, according to Bloomberg). And, similar to ChatGPT, the model has internet-browsing capabilities, enabling it to search the web for up-to-date information about topics.

Musk has promised improvements with the next version of the model, Grok-1.5, set to arrive later this year.

Grok-1.5, which features an upgraded context window (see this post on GPT-4 for an explanation of context windows and their effects), could drive features to summarize whole threads and replies, Musk said in an X Spaces conversation, and suggest post content.

How do I access Grok?

To get access to Grok, you need to have an X account. You also need to fork over $16 per month — $168 per year — for an X Premium+ plan.

X Premium+ is the highest-priced subscription on X, as it removes all the ads in the For You and Following feeds. In addition, Premium+ introduces a hub where users can get paid to post and offer fans subscriptions, and Premium+ users have their replies boosted the most in X’s rankings.

Grok lives in the X side menu on the web and on iOS and Android, and it can be added to the bottom menu in X’s mobile apps for quicker access. Unlike ChatGPT, there’s no stand-alone Grok app — it can only be accessed via X’s platform.

What can — and can’t — Grok do?

Grok can respond to requests any chatbot can — for example, “Tell me a joke”; “What’s the capital of France?”; “What’s the weather like today?”; and so on. But it has its limits.

Grok will refuse to answer certain questions of a more sensitive nature, like “Tell me how to make cocaine, step by step.” Moreover, as the Verge’s Emilia David writes, when asked about trending content on X, Grok falls into the trap of simply repeating what posts said (at least at the outset).

Unlike some other chatbot models, Grok is also text-only; it can’t understand the content of images, audio or videos, for example. But xAI has previously said that it intends to extend the underlying model to these modalities, and Musk has pledged to add art-generation capabilities to Grok along the lines of those currently offered by ChatGPT.

“Fun” mode and “regular” mode

Grok has two modes to adjust its tone: “fun” mode (which Grok defaults to) and “regular” mode.

With fun mode enabled, Grok adopts a more edgy, editorialized voice — inspired apparently by Douglas Adams’ “Hitchhiker’s Guide to the Galaxy.”

Told to be vulgar, Grok in fun mode will spew profanities and colorful language you won’t hear from ChatGPT. Ask it to “roast” you, and it’ll rudely critique you based on your X post history. Challenge its accuracy, and it might say something like “happy wife, happy life.”

Grok in fun mode also spews more falsehoods.

Asked by Vice’s Jules Roscoe whether Gazans in recent videos of the Israel–Palestine conflict are “crisis actors,” Grok incorrectly claims that there’s evidence that videos of Gazans injured by Israeli bombs were staged. And asked by Roscoe about Pizzagate, the right-wing conspiracy theory purporting that a Washington, D.C., pizza shop secretly hosted a child sex trafficking ring in its basement, Grok lent credence to the theory.

Grok’s responses in regular mode are more grounded. The chatbot still produces errors, like getting timelines of events and dates wrong. But they tend not to be as egregious as Grok in fun mode.

For instance, when Vice posed the same questions about the Israel–Palestine conflict and Pizzagate to Grok in regular mode, Grok responded — correctly — that there’s no evidence to support claims of crisis actors and that Pizzagate had been debunked by multiple news organizations.

Political views

Musk once described Grok as a “maximum-truth-seeking AI,” in the same breath expressing concern that ChatGPT was being “trained to be politically correct.” But Grok as it exists today isn’t exactly down-the-middle in its political views.

Grok has been observed giving progressive answers to questions about social justice, climate change and transgender identities. In fact, one researcher found its responses on the whole to be left-wing and libertarian — even more so than ChatGPT’s.

Here is Forbes’ Paul Tassi reporting:

Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to “train” Grok to say the “right” answer, ultimately leading him to change the input to just … manually tell Grok to say no.

Now, will Grok always be this woke? Perhaps not. Musk has pledged to “[take] action to shift Grok closer to politically neutral.” Time will tell what results.





WhatsApp trials Meta AI chatbot in India, more markets | TechCrunch


WhatsApp is testing Meta AI, its large language model-powered chatbot, with users in India and some other markets, signaling its intention to tap the massive user base to scale its AI offerings.

The company recently began testing the AI chatbot, until now available only in the U.S., with some users in India, many of whom said they had received access. India, home to more than 500 million WhatsApp users, is the instant messaging service’s largest market.

Meta confirmed the move in a statement. “Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

Meta unveiled Meta AI, its general-purpose assistant, in late September. The AI chatbot is designed to answer user queries directly within chats as well as offer them the ability to generate photorealistic images from text prompts.

WhatsApp’s massive global user base, boasting over 2 billion monthly active users, presents Meta with a unique opportunity to scale its AI offerings. By integrating Meta AI into WhatsApp, the Facebook-parent firm can expose its advanced language model and image generation capabilities to an enormous audience, potentially dwarfing the reach of its competitors.

The company separately confirmed earlier this week that it will be launching Llama 3, the next version of its open source large language model, within the next month.





X's Grok chatbot will soon get an upgraded model, Grok-1.5 | TechCrunch


Elon Musk’s AI startup, X.ai, has revealed its latest generative AI model, Grok-1.5. Set to power social network X’s Grok chatbot in the not-so-distant future (“in the coming days,” per a blog post), Grok-1.5 appears to be a measurable upgrade over its predecessor, Grok-1 — at least judging by the published benchmark results and specs.

Grok-1.5 benefits from “improved reasoning,” according to X.ai, particularly where it concerns coding and math-related tasks. The model more than doubled Grok-1’s score on a popular mathematics benchmark, MATH, and scored over 10 percentage points higher on the HumanEval test of programming language generation and problem-solving abilities.

It’s difficult to predict how those results will translate in actual usage. As we recently wrote, commonly-used AI benchmarks, which measure things as esoteric as performance on graduate-level chemistry exam questions, do a poor job of capturing how the average person interacts with models today.

One improvement that should lead to observable gains is the amount of context Grok-1.5 can understand compared to Grok-1.

Grok-1.5 can process contexts of up to 128,000 tokens. Here, “tokens” refers to bits of raw text (e.g., the word “fantastic” split into “fan,” “tas” and “tic”). Context, or context window, refers to input data (in this case, text) that a model considers before generating output (more text). Models with small context windows tend to forget the contents of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
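As a rough illustration of what a 128,000-token budget means in practice, here is a small Python sketch that approximates token counting and checks whether a document plus a prompt would fit in the window. The chunking rule is a crude stand-in for a real learned subword tokenizer (BPE or SentencePiece); Grok’s actual tokenizer and its true token counts are not public.

import re

CONTEXT_WINDOW = 128_000  # Grok-1.5's stated limit, in tokens

def rough_token_count(text: str) -> int:
    """Approximate a subword tokenizer: split on words/punctuation, then count
    roughly one token per three characters ("fantastic" -> "fan", "tas", "tic")."""
    pieces = re.findall(r"\w+|[^\w\s]", text)
    return sum(max(1, -(-len(piece) // 3)) for piece in pieces)  # ceil(len/3) per piece

def fits_in_context(document: str, prompt: str) -> bool:
    """Would this document plus the user's prompt fit in the model's context window?"""
    return rough_token_count(document) + rough_token_count(prompt) <= CONTEXT_WINDOW

# Example: a long X thread pasted in alongside a short instruction.
thread = "Summarize this thread about Grok-1.5 and its longer context window. " * 1000
print(rough_token_count(thread), fits_in_context(thread, "What are the key points?"))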

“[Grok-1.5 can] utilize information from substantially longer documents,” X.ai writes in the blog post. “Furthermore, the model can handle longer and more complex prompts while still maintaining its instruction-following capability as its context window expands.”

What’s historically set X.ai’s Grok models apart from other generative AI models is that they respond to questions about topics that are typically off-limits to other models, like conspiracies and more controversial political ideas. The models also answer questions with “a rebellious streak,” as Musk has described it, and outright rude language if requested to do so.

It’s unclear what changes, if any, Grok-1.5 brings in these areas. X.ai doesn’t allude to this in the blog post.

Grok-1.5 will soon be available to early testers on X, accompanied by “several new features.” Musk has previously hinted at summarizing threads and replies, and suggesting content for posts; we’ll see if those arrive soon enough.

The announcement comes after X.ai open sourced Grok-1, albeit without the code necessary to fine-tune or further train it. More recently, Musk said that more users on X — specifically those paying for X’s $8-per-month Premium plan — would gain access to the Grok chatbot, which was previously only available to X Premium+ customers (who pay $16 per month).

