From Digital Age to Nano Age. WorldWide.


Robotic Automations

NIST launches a new platform to assess generative AI | TechCrunch


The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, corporations and the broader public, today announced the launch of NIST GenAI, a new program spearheaded by NIST to assess generative AI technologies, including text- and image-generating AI.

A platform designed to evaluate various forms of generative AI tech, NIST GenAI will release benchmarks, help create “content authenticity” detection (i.e. deepfake-checking) systems and encourage the development of software to spot the source of fake or misleading information, explains NIST on its newly launched NIST GenAI site and in a press release.

“The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies,” the press release reads. “These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”

NIST GenAI’s first project is a pilot study to build systems that can reliably tell the difference between human-created and AI-generated media, starting with text. (While many services purport to detect deepfakes, studies — and our own testing — have shown them to be unreliable, particularly when it comes to text.) NIST GenAI is inviting teams from academia, industry and research labs to submit either “generators” — AI systems to generate content — or “discriminators” — systems that try to identify AI-generated content.

Generators in the study must produce summaries given a topic and a set of documents, while discriminators must detect whether a given summary is AI-written. To ensure fairness, NIST GenAI will provide the data necessary to train generators and discriminators; systems trained on publicly available data won’t be accepted, including but not limited to open models like Meta’s Llama 3.
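To make the two roles concrete, here is a minimal, purely illustrative sketch of the interfaces a pilot entry might expose. The class and method names are assumptions for illustration, not NIST's actual submission format, and the discriminator's heuristic is a deliberately naive stand-in for a trained classifier.

```python
from dataclasses import dataclass


@dataclass
class Task:
    topic: str
    documents: list[str]


class Generator:
    """Produces a summary for a topic from a set of documents."""

    def generate(self, task: Task) -> str:
        # Trivial extractive placeholder: take the first sentence of each document.
        return " ".join(doc.split(". ")[0] for doc in task.documents)


class Discriminator:
    """Labels a summary as human- or AI-written."""

    def is_ai_generated(self, summary: str) -> bool:
        # A real entry would use a trained classifier; this toy heuristic
        # flags suspiciously uniform sentence lengths instead.
        lengths = [len(s) for s in summary.split(". ") if s]
        if len(lengths) < 2:
            return False
        avg = sum(lengths) / len(lengths)
        return all(abs(length - avg) < 5 for length in lengths)
```

In the actual evaluation, NIST would feed generator outputs (alongside human-written summaries) to competing discriminators and score how reliably each one separates the two.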

Registration for the pilot will begin May 1, with the results scheduled to be published in February 2025.

NIST GenAI’s launch — and deepfake-focused study — comes as deepfakes grow exponentially.

According to data from Clarity, a deepfake detection firm, 900% more deepfakes have been created this year compared to the same time frame last year. It’s causing alarm, understandably. A recent poll from YouGov found that 85% of Americans said they were concerned about the spread of misleading deepfakes online.

The launch of NIST GenAI is a part of NIST’s response to President Joe Biden’s executive order on AI, which laid out rules requiring greater transparency from AI companies about how their models work and established a raft of new standards, including for labeling content generated by AI.

It’s also the first AI-related announcement from NIST after the appointment of Paul Christiano, a former OpenAI researcher, to the agency’s AI Safety Institute.

Christiano was a controversial choice for his “doomerist” views; he once predicted that “there’s a 50% chance AI development could end in [humanity’s destruction].” Critics — including scientists within NIST, reportedly — fear Christiano may encourage the AI Safety Institute to focus on “fantasy scenarios” rather than realistic, more immediate risks from AI.

NIST says that NIST GenAI will inform the AI Safety Institute’s work.


Software Development in Sri Lanka


Watch: Elon Musk’s big plans for xAI include raising $6 billion


TechCrunch recently broke the news that Elon Musk’s xAI is raising $6 billion at a pre-money valuation of $18 billion.

The deal hasn’t closed yet, so the numbers could change. But it sounds like Musk is making an ambitious pitch to investors about his 10-month-old startup — a rival to OpenAI, which he also co-founded and is currently suing for allegedly abandoning its initial commitment to focus on the good of humanity over profit.

You may be wondering: Doesn’t Musk have enough companies already? There’s Tesla, SpaceX, X (formerly Twitter), Neuralink, The Boring Company … maybe he should spend his time on the existing businesses that have struggles of their own.

But in the xAI pitch, Musk’s connection to these other companies is a feature, not a bug. xAI could get access to crucial training data from across his empire — and its technology could, in turn, help Tesla achieve its dream of true self-driving cars and bring its humanoid Optimus robot into factories.

Of course, Musk’s hype doesn’t always match up to reality. But with this impressive new funding, xAI could become an even more formidable competitor in the AI world. Hit play, then leave your thoughts below!



Copilot Workspace is GitHub's take on AI-powered software engineering | TechCrunch


Is the future of software development an AI-powered IDE? GitHub’s floating the idea.

At its annual GitHub Universe conference in San Francisco on Monday, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as “Copilot-powered agents” to help developers brainstorm, plan, build, test and run code in natural language.

Jonathan Carter, head of GitHub Next, GitHub’s software R&D team, pitches Workspace as somewhat of an evolution of GitHub’s AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language.

“Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs,” Carter said. “So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire codebase.”

At last count, Copilot had over 1.8 million paying individual customers and 50,000 enterprise customers. But Carter envisions a far larger base, drawn in by feature expansions with broad appeal, like Workspace.

“Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a ‘thought partnership’ with AI,” Carter said. “You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks … We believe there’s a lot of value that can be delivered in an AI-native developer environment that isn’t constrained by existing workflows.”

There’s certainly internal pressure to make Copilot profitable.

Copilot loses an average of $20 a month per user, according to a Wall Street Journal report, with some customers costing GitHub as much as $80 a month. And the number of rival services continues to grow. There’s Amazon’s CodeWhisperer, which the company made free to individual developers late last year. There are also startups, like Magic, Tabnine, Codegen and Laredo.

Given a GitHub repo or a specific bug within a repo, Workspace — underpinned by OpenAI’s GPT-4 Turbo model — can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo’s comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it.

Image Credits: GitHub

The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.

Perhaps the most obvious way to launch Workspace is from the new “Open in Workspace” button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, “Add documentation for the changes in this pull request,” which, once submitted, gets added to a list of “sessions” within the new dedicated Workspace view.

Image Credits: GitHub

Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.
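The session flow described above — an ordered list of steps the developer can inspect, delete or re-order — can be sketched as a simple data model. This is an illustrative mock-up only, not GitHub's actual API; all names here are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    name: str    # e.g. "specification", "plan" or "implementation"
    detail: str  # the generated text or code for this step


@dataclass
class Session:
    """Toy model of a Workspace-style session: an editable pipeline of steps."""

    steps: list[Step] = field(default_factory=list)

    def reorder(self, src: int, dst: int) -> None:
        # Move a step to a new position, shifting the others.
        self.steps.insert(dst, self.steps.pop(src))

    def delete(self, index: int) -> None:
        del self.steps[index]
```

The point of the structure is that each stage's output is a first-class, editable artifact rather than an opaque end-to-end generation.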

“If you ask any developer where they tend to get stuck with a new project, you’ll often hear them say it’s knowing where to start,” Carter said. “Copilot Workspace lifts that burden and gives developers a plan to start iterating from.”

Image Credits: GitHub

Workspace enters technical preview on Monday, optimized for a range of devices including mobile.

Importantly, because it’s in preview, Workspace isn’t covered by GitHub’s IP indemnification policy, which promises to assist with the legal fees of customers facing third-party claims alleging that the AI-generated code they’re using infringes on IP. (Generative AI models notoriously regurgitate their training data sets, and GPT-4 Turbo was trained partly on copyrighted code.)

GitHub says that it hasn’t determined how it’s going to productize Workspace, but that it’ll use the preview to “learn more about the value it delivers and how developers use it.”

I think the more important question is: Will Workspace fix the existential issues surrounding Copilot and other AI-powered coding tools?

An analysis of over 150 million lines of code committed to project repos over the past several years by GitClear, the developer of the code analysis tool of the same name, found that Copilot was resulting in more mistaken code being pushed to codebases and more code being re-added as opposed to reused and streamlined, creating headaches for code maintainers.

Elsewhere, security researchers have warned that Copilot and similar tools can amplify existing bugs and security issues in software projects. And Stanford researchers have found that developers who accept suggestions from AI-powered coding assistants tend to produce less secure code. (GitHub stressed to me that it uses an AI-based vulnerability prevention system to try to block insecure code in addition to an optional code duplication filter to detect regurgitations of public code.)

Yet devs aren’t shying away from AI.

In a Stack Overflow poll from June 2023, 44% of developers said that they use AI tools in their development process now, and 26% plan to soon. Gartner predicts that 75% of enterprise software engineers will employ AI code assistants by 2028.

By emphasizing human review, perhaps Workspace can indeed help clean up some of the mess introduced by AI-generated code. We’ll find out soon enough as Workspace makes its way into developers’ hands.

“Our primary goal with Copilot Workspace is to leverage AI to reduce complexity so developers can express their creativity and explore more freely,” Carter said. “We truly believe the combination of human plus AI is always going to be superior to one or the other alone, and that’s what we’re betting on with Copilot Workspace.”



How RPA vendors aim to remain relevant in a world of AI agents | TechCrunch


What’s the next big thing in enterprise automation? If you ask the tech giants, it’s agents — driven by generative AI.

There’s no universally accepted definition of agent, but these days the term is used to describe generative AI-powered tools that can perform complex tasks through human-like interactions across software and web platforms.

For example, an agent could create an itinerary by filling in a customer’s info on airlines’ and hotel chains’ websites. Or an agent could order the least expensive ride-hailing service to a location by automatically comparing prices across apps.
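The ride-hailing example boils down to an agent querying several services and acting on the comparison. A minimal sketch, with entirely hypothetical provider lookups standing in for the UI- or API-level interactions a real agent would perform:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Quote:
    provider: str
    price: float


# Hypothetical per-app fare lookups; a real agent would drive each
# service's app or API to obtain these. Stubbed with fixed fares here.
def quote_app_a(destination: str) -> Quote:
    return Quote("AppA", 14.50)


def quote_app_b(destination: str) -> Quote:
    return Quote("AppB", 11.25)


def cheapest_ride(destination: str, lookups: list[Callable[[str], Quote]]) -> Quote:
    """Compare fares across apps and return the lowest quote."""
    quotes = [lookup(destination) for lookup in lookups]
    return min(quotes, key=lambda q: q.price)
```

What distinguishes an agent from a script like this is that the comparison loop is planned and executed by the model itself, across interfaces built for humans.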

Vendors sense opportunity. ChatGPT maker OpenAI is reportedly deep into developing AI agent systems. And Google demoed a slew of agent-like products at its annual Cloud Next conference in early April.

“Companies should start preparing for wide-scale adoption of autonomous agents today,” analysts at Boston Consulting Group wrote recently in a report — citing experts who estimate that autonomous agents will go mainstream in three to five years.

Old-school automation

So where does that leave RPA?

Robotic process automation (RPA) came into vogue over a decade ago as enterprises turned to the tech to bolster their digital transformation efforts while reducing costs. Like an agent, RPA drives workflow automation. But it’s a much more rigid form, based on “if-then” preset rules for processes that can be broken down into strictly defined, discretized steps.

“RPA can mimic human actions, such as clicking, typing or copying and pasting, to perform tasks faster and more accurately than humans,” Saikat Ray, VP analyst at Gartner, explained to TechCrunch in an interview. “However, RPA bots have limitations when it comes to handling complex, creative or dynamic tasks that require natural language processing or reasoning skills.”

This rigidity makes RPA expensive to build — and considerably limits its applicability.
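That rigidity is easy to see in miniature. The toy function below (not any vendor's API) shows the preset, if-then character of classic RPA: the bot reads a field at an exact, hard-coded location and branches on a fixed rule, so any change to the form's layout breaks the workflow outright.

```python
def process_invoice(form: dict) -> str:
    """Toy RPA-style rule: rigid field lookup plus a hard-coded branch."""
    # Step 1: read a field at an exact, preset location.
    amount = form.get("invoice_total")
    if amount is None:
        # A renamed or moved field breaks the automation entirely.
        raise RuntimeError("field 'invoice_total' not found; automation broken")
    # Step 2: branch on a hard-coded if-then rule.
    if float(amount) > 10_000:
        return "route_to_manager"
    return "auto_approve"
```

Generative AI-powered agents are pitched as the fix for exactly this failure mode: instead of a fixed selector, the model infers which field is the invoice total even after the layout changes.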

A 2022 survey from Robocorp, an RPA vendor, found that of the organizations that say they’ve adopted RPA, 69% experience broken automation workflows at least once a week — many of which take hours to fix. Entire businesses have been made out of helping enterprises manage their RPA installations and prevent them from breaking.

RPA vendors aren’t naive. They’re well aware of the challenges — and believe that generative AI could solve many of them without hastening their platforms’ demise. In RPA vendors’ minds, RPA and generative AI-powered agents can peacefully co-exist — and perhaps one day even grow to complement each other.

Generative AI automation

UiPath, one of the larger players in the RPA market with an estimated 10,000+ customers, including Uber, Xerox and CrowdStrike, recently announced new generative AI features focused on document and message processing, as well as taking automated actions to deliver what UiPath CEO Bob Enslin calls “one-click digital transformation.”

“These features provide customers generative AI models that are trained for their specific tasks,” Enslin told TechCrunch. “Our generative AI powers workloads such as text completion for emails, categorization, image detection, language translation, the ability to filter out personally identifiable information [and] quickly answering any people-topic-related questions based off of knowledge from internal data.”

One of UiPath’s more recent explorations in the generative AI domain is Clipboard AI, which combines UiPath’s platform with third-party models from OpenAI, Google and others to — as Enslin puts it — “bring the power of automation to anyone that has to copy/paste.” Clipboard AI lets users highlight data from a form, and — leveraging generative AI to figure out the right places for the copied data to go — point it to another form, app, spreadsheet or database.

Image Credits: UiPath

“UiPath sees the need to bring action and AI together; this is where value is created,” Enslin said. “We believe the best performance will come from those that combine generative AI and human judgment — what we call human-in-the-loop — across end-to-end processes.”

Automation Anywhere, UiPath’s main rival, is also attempting to fold generative AI into its RPA technologies.

Last year, Automation Anywhere launched generative AI-powered tools to create workflows from natural language, summarize content, extract data from documents and — perhaps most significantly — adapt to changes in apps that would normally cause an RPA automation to fail.

“[Our generative AI models are] developed on top of [open] large language models and trained with anonymized metadata from more than 150 million automation processes across thousands of enterprise applications,” Peter White, SVP of enterprise AI and automation at Automation Anywhere, told TechCrunch. “We continue to build custom machine learning models for specific tasks within our platform and are also now building customized models on top of foundational generative AI models using our automation datasets.”

Next-gen RPA

Ray notes it’s important to be cognizant of generative AI’s limitations — namely biases and hallucinations — as it powers a growing number of RPA capabilities. But, risks aside, he believes generative AI stands to add value to RPA by transforming the way these platforms work and “creating new possibilities for automation.”

“Generative AI is a powerful technology that can enhance the capabilities of RPA platforms enabling them to understand and generate natural language, automate content creation, improve decision-making and even generate code,” Ray said. “By integrating generative AI models, RPA platforms can offer more value to their customers, increase their productivity and efficiency and expand their use cases and applications.”

Craig Le Clair, principal analyst at Forrester, sees RPA platforms as being ripe for expansion to support autonomous agents and generative AI as their use cases grow. In fact, he anticipates RPA platforms morphing into all-around toolsets for automation — toolsets that help deploy RPA in addition to related generative AI technologies.

“RPA platforms have the architecture to manage thousands of task automations and this bodes well for central management of AI agents,” he said. “Thousands of companies are well established with RPA platforms and will be open to using them for generative AI-infused agents. RPA has grown in part thanks to its ability to integrate easily with existing work patterns, through UI integration, and this will remain valuable for more intelligent agents going forward.”

UiPath is already beginning to take steps in this direction with a new capability, Context Grounding, that entered preview earlier in the month. As Enslin explained it to me, Context Grounding is designed to improve the accuracy of generative AI models — both first- and third-party — by converting business data those models might draw on into an “optimized” format that’s easier to index and search.

“Context Grounding extracts information from company-specific datasets, like a knowledge base or internal policies and procedures, to create more accurate and insightful responses,” Enslin said.

If there’s anything holding RPA vendors back, it’s the ever-present temptation to lock customers in, Le Clair said. He stressed the need for platforms to “remain agnostic” and offer tools that can be configured to work with a range of current — and future — enterprise systems and workflows.

To that, Enslin pledged that UiPath will remain “open, flexible and responsible.”

“The future of AI will require a combination of specialized AI with generative AI,” he continued. “We want customers to be able to confidently use all kinds of AI.”

White didn’t commit to neutrality exactly. But he emphasized that Automation Anywhere’s roadmap is being heavily shaped by customer feedback.

“What we hear from every customer, across every industry, is that their ability to incorporate automation in many more use cases has increased exponentially with generative AI,” he said. “With generative AI infused into intelligent automation technologies like RPA, we see the potential for organizations to reduce operating costs and increase productivity. Companies who fail to adopt these technologies will struggle to compete against others who embrace generative AI and automation.”



Photo-sharing community EyeEm will license users' photos to train AI if they don't delete them | TechCrunch


EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users’ photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users’ content to “train, develop, and improve software, algorithms, and machine-learning models.” Users were given 30 days to opt out by removing all their content from EyeEm’s platform. Otherwise, they were consenting to this use case for their work.

At the time of its 2023 acquisition, EyeEm’s photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik’s over time.

Once thought of as a possible challenger to Instagram — or at least “Europe’s Instagram” — EyeEm had dwindled to a staff of three before selling to Freepik, TechCrunch’s Ingrid Lunden previously reported. Joaquin Cuenca Abela, CEO of Freepik, hinted at the company’s possible plans for EyeEm, saying it would explore how to bring more AI into the equation for creators on the platform.

As it turns out, that meant selling their work to train AI models.

Now, EyeEm’s updated Terms & Conditions reads as follows:

8.1 Grant of Rights – EyeEm Community

By uploading Content to EyeEm Community, you grant us regarding your Content the non-exclusive, worldwide, transferable and sublicensable right to reproduce, distribute, publicly display, transform, adapt, make derivative works of, communicate to the public and/or promote such Content.

This specifically includes the sublicensable and transferable right to use your Content for the training, development and improvement of software, algorithms and machine learning models. In case you do not agree to this, you should not add your Content to EyeEm Community.

The rights granted in this section 8.1 regarding your Content remains valid until complete deletion from EyeEm Community and partner platforms according to section 13. You can request the deletion of your Content at any time. The conditions for this can be found in section 13.

Section 13 details a complicated process for deletions that begins with first deleting photos directly — which would not impact content that had been previously shared to EyeEm Magazine or social media, the company notes. To delete content from the EyeEm Market (where photographers sold their photos) or other content platforms, users would have to submit a request to [email protected] and provide the Content ID numbers for those photos they wanted to delete and whether it should be removed from their account, as well, or the EyeEm market only.

Of note, the notice says that these deletions from EyeEm market and partner platforms could take up to 180 days. Yes, that’s right: requested deletions take up to 180 days but users only have 30 days to opt out. That means the only option is manually deleting photos one by one.

Worse still, the company adds that:

You hereby acknowledge and agree that your authorization for EyeEm to market and license your Content according to sections 8 and 10 will remain valid until the Content is deleted from EyeEm and all partner platforms within the time frame indicated above. All license agreements entered into before complete deletion and the rights of use granted thereby remain unaffected by the request for deletion or the deletion.

Section 8 is where licensing rights to train AI are detailed. In Section 10, EyeEm informs users they will forgo their right to any payouts for their work if they delete their account — something users may think to do to avoid having their data fed to AI models. Gotcha!

EyeEm’s move is an example of how AI models are being trained on the back of users’ content, sometimes without their explicit consent. Though EyeEm did offer an opt-out procedure of sorts, any photographer who missed the announcement would have lost the right to dictate how their photos were to be used going forward. Given that EyeEm’s status as a popular Instagram alternative had significantly declined over the years, many photographers may have forgotten they had ever used it in the first place. They certainly may have ignored the email, if it wasn’t already in a spam folder somewhere.

Those who did notice the changes were upset they were only given a 30-day notice and no options to bulk delete their contributions, making it more painful to opt out.

Requests for comment sent to EyeEm weren’t immediately returned, but given this countdown had a 30-day deadline, we’ve opted to publish before hearing back.

This sort of dishonest behavior is why users today are considering a move to the open social web. The federated platform, Pixelfed, which runs on the same ActivityPub protocol that powers Mastodon, is capitalizing on the EyeEm situation to attract users.

In a post on its official account, Pixelfed announced “We will never use your images to help train AI models. Privacy First, Pixels Forever.”





Curio raises funds for Rio, an 'AI news anchor' in an app | TechCrunch


AI may be inching its way into the newsroom, as outlets like Newsweek, Sports Illustrated, Gizmodo, VentureBeat, CNET and others have experimented with articles written by AI. But while most respectable journalists will condemn this use case, there are a number of startups that think AI can enhance the news experience — at least on the consumer’s side. The latest to join the fray is Rio, an “AI news anchor” designed to help readers connect with the stories and topics they’re most interested in from trustworthy sources.

The new app, from the same team behind AI-powered audio journalism startup Curio, was first unveiled at last month’s South by Southwest Festival in Austin. It has raised funding from Khosla Ventures and the head of TED, Chris Anderson, who also backed Curio. (The startup says the round has not yet closed, so it can’t disclose the amount.)

Curio itself was founded in 2016 by ex-BBC strategist Govind Balakrishnan and London lawyer Srikant Chakravarti; Rio is a new effort that will expand the use of Curio’s AI technology.

First developed as a feature within Curio’s app, Rio scans headlines from trusted papers and magazines like Bloomberg, The Wall Street Journal, Financial Times, The Washington Post and others, and then curates that content into a daily news briefing you can either read or listen to.

In addition, the team says Rio will keep users from finding themselves in an echo chamber by seeking out news that expands their understanding of topics and encourages them to dive deeper.

Image Credits: Curio/Rio

In tests, Rio prepared a daily briefing presented in something of a Story-like interface with graphics and links to news articles you could tap on at the bottom of the screen that would narrate the article using an AI voice. (These were full articles, to be clear, not AI summaries.) You advance through the headlines in the same way as you would tap through a Story on a social media app like Instagram.

Curio says Rio’s AI technology won’t fabricate information and will only reference content from its trusted publisher partners. Rio won’t use publisher content to train an LLM (large language model) without “explicit consent,” it says.

Image Credits: Curio/Rio

Beyond the briefing, you can also interact with Rio in an AI chatbot interface where you can ask about other topics of interest. Suggested topics — like “TikTok ban” or “Ukraine War,” for example — appear as small pills above the text input box. We found the AI was a little slow to respond at times, but, otherwise, it performed as expected.

Plus, Rio would offer to create an audio episode for your queries if you want to learn more.

Co-founder Balakrishnan said that Curio users had asked Rio over 20,000 questions since it launched as a feature in Curio last May, which is why the company decided to spin out the tech into its own app.

“AI has us all wondering what’s true and what’s not. You can scan AI sites for quick answers, but trusting them blindly is a bit of a gamble,” noted Chakravarti in a statement released around Rio’s debut at SXSW. “Reliable knowledge is hard to come by. Only a lucky few get access to fact-checked, verified information. Rio guides you through the news, turning everyday headlines from trusted sources into knowledge. Checking the news with Rio leaves you feeling fulfilled instead of down.”

It’s hard to say if Rio is sticky enough to demand its standalone product, but it’s easy to imagine an interface like this at some point coming to larger news aggregators, like Google News or Apple News, perhaps, or even to individual publishers’ sites. Meanwhile, Curio will also continue to exist with a focus on audio news.

Curio is not the only startup looking to AI to enhance the news reading experience. Former Twitter engineers are building Particle, an AI-powered news reader, backed by $4.4 million. Another AI-powered news app, Bulletin, also launched to tackle clickbait along with offering news summaries. Artifact had also leveraged AI before exiting to TechCrunch’s parent company, Yahoo.

Rio is currently in early access, which means you’ll need an invitation to get in. Otherwise, you can join the app’s waitlist at rionews.ai. The company tells us it plans to launch publicly later this summer. (As a reward for reading to the bottom, five of you can use my own invite link to get in.)

 





Watch: Between Rabbit’s R1 vs Humane’s Ai Pin, which had the best launch? | TechCrunch


After a successful unveiling at CES, Rabbit is letting journalists try out the R1 — a small orange gadget with an AI-powered voice interface. This comes just weeks after the launch of the Humane Ai Pin, which is similarly pitched as a new kind of mobile device with AI at its center.

While we’re still waiting on in-depth reviews (as opposed to an initial hands-on) of the R1, there are some pretty clear differences between the two devices.

Most noticeably, the Ai Pin is screen-less, relying instead on a voice interface and projector, while the R1 has a 2.88-inch screen (though it’s meant to be used for much more than typing in your WiFi password). And while the Ai Pin costs $699, plus a $24 monthly subscription, the R1 is just $199. Both, according to TechCrunch’s Brian Heater, show the value of good industrial design.

It sounds like neither the Ai Pin (which got some truly scathing reviews) nor the R1 makes a fully convincing case that it’s time to replace our smartphones — or that AI chatbots are the best way to get information from the internet. But if nothing else, it’s exciting that the hardware industry feels wide open again. Press play, then let us know if you’re planning to try either the R1 or the Ai Pin!



Google's new 'Speaking practice' feature uses AI to help users improve their English skills | TechCrunch


Google is testing a new “Speaking practice” feature in Search that helps users improve their conversational English skills. The company told TechCrunch that the feature is available to English learners in Argentina, Colombia, India, Indonesia, Mexico, and Venezuela who have joined Search Labs, its program for users to experiment with early-stage Google Search experiences.

The company says the goal of the experiment is to help improve a user’s English skills by getting them to take part in interactive language learning exercises powered by AI to help them use new words in everyday scenarios.

Speaking practice builds on a feature, launched last October, that is designed to help English learners improve their skills. While that earlier feature lets English learners practice speaking sentences in context and receive feedback on grammar and clarity, Speaking practice adds the dimension of back-and-forth conversational practice.

The feature was first spotted by an X user, who shared screenshots of the functionality in action.

Speaking practice works by asking the user a conversational question that they need to respond to using specific words. According to the screenshots, one possible scenario could involve the AI telling the user that it wants to get into shape and then asking: “What should I do?” The user would then need to give a response that includes the words “exercise,” “heart,” and “tired.”

The idea behind the feature is to help English language learners hold a conversation in English, while also understanding how to properly use different words.

The launch of the new feature indicates that Google might be laying the groundwork for a true competitor to language learning apps like Duolingo and Babbel. This isn’t the first time that Google has dabbled in language learning and education tools. Back in 2019, Google launched a feature that allowed Search users to practice how to pronounce words properly.




Stainless is helping OpenAI, Anthropic and others build SDKs for their APIs | TechCrunch


Besides a focus on generative AI, what do AI startups like OpenAI, Anthropic and Together AI have in common? They use Stainless, a platform created by ex-Stripe staffer Alex Rattray, to generate SDKs for their APIs.

Rattray, who studied economics at the University of Pennsylvania, has been building things for as long as he can remember, from an underground newspaper in high school to a bike-share program in college. Rattray picked up programming on the side while at UPenn, which led to a job at Stripe as an engineer on the developer platform team.

At Stripe, Rattray helped revamp API documentation and launch the system that powers Stripe’s API client SDK. It was while working on those projects that Rattray observed there wasn’t an easy way for companies, including Stripe, to build SDKs for their APIs at scale.

“Handwriting the SDKs couldn’t scale,” he told TechCrunch. “Today, every API designer has to settle a million and one ‘bikeshed’ questions all over again, and painstakingly enforce consistency around these decisions across their API.”

Now, you might be wondering, why would a company need an SDK if it offers an API? APIs are simply protocols, enabling software components to communicate with each other and transfer data. SDKs, on the other hand, offer a set of software-crafting tools that plug into APIs. Without an SDK to accompany an API, API users are forced to read API docs and build everything themselves, which isn’t the best experience.
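To make that distinction concrete, here is a minimal sketch in Python. The endpoint, payload shape and client names are all invented for illustration (this is not any real vendor’s API): without an SDK, every caller assembles raw HTTP details straight from the docs; a generated SDK wraps the same call behind a typed method.

```python
import json
from dataclasses import dataclass

# Without an SDK: every caller assembles raw HTTP details straight from the
# API docs -- endpoint path, auth header, JSON shape -- and can get them wrong.
def raw_create_summary(api_key: str, topic: str, documents: list[str]) -> dict:
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/summaries",  # hypothetical endpoint
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"topic": topic, "documents": documents}),
    }

# With an SDK: a generated client hides those transport details behind a
# typed method, so callers only supply domain arguments.
@dataclass
class SummariesClient:
    api_key: str
    base_url: str = "https://api.example.com/v1"

    def create(self, topic: str, documents: list[str]) -> dict:
        # A real generated SDK would send the request and parse the response;
        # this sketch just returns the request it would have sent.
        return {
            "method": "POST",
            "url": f"{self.base_url}/summaries",
            "headers": {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
            "body": json.dumps({"topic": topic, "documents": documents}),
        }

client = SummariesClient(api_key="sk-test")
req = client.create("climate", ["report.txt"])
print(req["url"])  # https://api.example.com/v1/summaries
```

The SDK version is what a generator emits in each target language, which is why hand-maintaining one per language quickly stops scaling.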

Rattray’s solution is Stainless, which takes in an API spec and generates SDKs in a range of programming languages including Python, TypeScript, Kotlin, Go and Java. As APIs evolve and change, Stainless’ platform pushes those updates with options for versioning and publishing changelogs.
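For context, the “API spec” such generators consume is typically an OpenAPI document. A minimal, hypothetical fragment describing one endpoint might look like this (the path and schema are invented; Stainless’s own config format is not shown here):

```yaml
# Hypothetical OpenAPI fragment: one POST endpoint a generator could turn
# into client methods in each target language.
openapi: "3.1.0"
info:
  title: Example Summaries API
  version: "1.0.0"
paths:
  /summaries:
    post:
      operationId: createSummary
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                topic: { type: string }
                documents:
                  type: array
                  items: { type: string }
      responses:
        "200":
          description: The generated summary.
```

From a document like this, a generator can derive method names, typed parameters and response models mechanically, which is what lets updates propagate to every language at once.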

“API companies today have a team of several people building libraries in each new language to connect to their API,” Rattray said. “These libraries inevitably become inconsistent, fall out of date and require constant changes from specialist engineers. Stainless fixes that problem by generating them via code.”

Stainless isn’t the only API-to-SDK generator out there. There’s LibLab and Speakeasy, to name a couple, plus longstanding open source projects such as the OpenAPI Generator.

Stainless, however, delivers more “polish” than most others, Rattray said, thanks partly to its use of generative AI.

“Stainless uses generative AI to produce an initial ‘Stainless config’ for customers, which is then up to them to fine-tune to their API,” he explained. “This is particularly valuable for AI companies, whose huge user bases include many novice developers trying to integrate with complex features like chat streaming and tools.”

Perhaps that’s what attracted customers like OpenAI, Anthropic and Together AI, along with Lithic, LangChain, Orb, Modern Treasury and Cloudflare. Stainless has “dozens” of paying clients in its beta, Rattray said, and some of the SDKs it’s generated, including OpenAI’s Python SDK, are getting millions of downloads per week.

“If your company wants to be a platform, your API is the bedrock of that,” he said. “Great SDKs for your API drive faster integration, broader feature adoption, quicker upgrades and trust in your engineering quality.”

Most customers are paying for Stainless’ enterprise tier, which comes with additional white-glove services and AI-specific functionality. Publishing a single SDK with Stainless is free. But companies have to fork over between $250 per month and $30,000 per year for multiple SDKs across multiple programming languages.

Rattray bootstrapped Stainless “with revenue from day one,” he said, adding that the company could be profitable as soon as this year; annual recurring revenue is hovering around $1 million. But Rattray opted instead to take on outside investment to build new product lines.

Stainless recently closed a $3.5 million seed round with participation from Sequoia and The General Partnership.

“Across the tech ecosystem, Stainless stands out as a beacon that elevates the developer experience, rivaling the high standard once set by Stripe,” said Anthony Kline, partner at The General Partnership. “As APIs continue to be the core building blocks of integrating services like LLMs into applications, Alex’s first-hand experience pioneering Stripe’s API codegen system uniquely positions him to craft Stainless into the quintessential platform for seamless, high-quality API interactions.”

Stainless has a 10-person team based in New York. Rattray expects headcount to grow to 15 or 20 by the end of the year.


Exclusive: Eric Schmidt-backed Augment, a GitHub Copilot rival, launches out of stealth with $252M


AI is supercharging coding — and developers are embracing it.

In a recent StackOverflow poll, 44% of software engineers said that they use AI tools as part of their development processes now and 26% plan to soon. Gartner estimates that over half of organizations are currently piloting or have already deployed AI-driven coding assistants, and that 75% of developers will use coding assistants in some form by 2028.

Ex-Microsoft software developer Igor Ostrovsky believes that soon, there won’t be a developer who doesn’t use AI in their workflows. “Software engineering remains a difficult and all-too-often tedious and frustrating job, particularly at scale,” he told TechCrunch. “AI can improve software quality, team productivity and help restore the joy of programming.”

So Ostrovsky decided to build the AI-powered coding platform that he himself would want to use.

That platform is Augment, and on Wednesday it emerged from stealth with $252 million in funding at a near-unicorn ($977 million) post-money valuation. With investments from former Google CEO Eric Schmidt and VCs including Index Ventures, Sutter Hill Ventures, Lightspeed Venture Partners, Innovation Endeavors and Meritech Capital, Augment aims to shake up the still-nascent market for generative AI coding technologies.

“Most companies are dissatisfied with the programs they produce and consume; software is too often fragile, complex and expensive to maintain with development teams bogged down with long backlogs for feature requests, bug fixes, security patches, integration requests, migrations and upgrades,” Ostrovsky said. “Augment has both the best team and recipe for empowering programmers and their organizations to deliver high-quality software quicker.”

Ostrovsky spent nearly seven years at Microsoft before joining Pure Storage, a startup developing flash data storage hardware and software products, as a founding engineer. While at Microsoft, Ostrovsky worked on components of Midori, a next-generation operating system the company never released but whose concepts have made their way into other Microsoft projects over the last decade.

In 2022, Ostrovsky and Guy Gur-Ari, previously an AI research scientist at Google, teamed up to create Augment’s MVP. To fill out the startup’s executive ranks, Ostrovsky and Gur-Ari brought on Scott Dietzen, ex-CEO of Pure Storage, and Dion Almaer, formerly a Google engineering director and a VP of engineering at Shopify.

Augment remains a strangely hush-hush operation.

In our conversation, Ostrovsky wasn’t willing to say much about the user experience or even the generative AI models driving Augment’s features (whatever they may be) — save that Augment is using fine-tuned “industry-leading” open models of some sort.

He did say how Augment plans to make money: standard software-as-a-service subscriptions. Pricing and other details will be revealed later this year, Ostrovsky added, closer to Augment’s planned GA release.

“Our funding provides many years of runway to continue to build what we believe to be the best team in enterprise AI,” he said. “We’re accelerating product development and building out Augment’s product, engineering and go-to-market functions as the company gears up for rapid growth.”

Rapid growth is perhaps the best shot Augment has at making waves in an increasingly cutthroat industry.

Practically every tech giant offers its own version of an AI coding assistant. Microsoft has GitHub Copilot, which is by far the most firmly entrenched, with over 1.3 million paying individual users and 50,000 enterprise customers as of February. Amazon has AWS’ CodeWhisperer. And Google has Gemini Code Assist, recently rebranded from Duet AI for Developers.

Elsewhere, there’s a torrent of coding assistant startups: Magic, Tabnine, Codegen, Refact, TabbyML, Sweep, Laredo and Cognition (which reportedly just raised $175 million), to name a few. Harness and JetBrains, which developed the Kotlin programming language, recently released their own. So did Sentry (albeit with more of a cybersecurity bent).

Can they all — plus Augment now — do business harmoniously together? It seems unlikely. Eye-watering compute costs alone make the AI coding assistant business a challenging one to maintain. Overruns related to training and serving models forced generative AI coding startup Kite to shut down in December 2022. Even Copilot loses money, to the tune of around $20 to $80 a month per user, according to The Wall Street Journal.

Ostrovsky implies that there’s momentum behind Augment already; he claims that “hundreds” of software developers across “dozens” of companies including payment startup Keeta (which is also Eric Schmidt-backed) are using Augment in early access. But will the uptake be sustained? That’s the million-dollar question, indeed.

I also wonder if Augment has made any steps toward solving the technical setbacks plaguing code-generating AI, particularly around vulnerabilities.

An analysis by GitClear, the developer of the code analytics tool of the same name, found that coding assistants are resulting in more mistaken code being pushed to codebases, creating headaches for software maintainers. Security researchers have warned that generative coding tools can amplify existing bugs and exploits in projects. And Stanford researchers have found that developers who accept code recommendations from AI assistants tend to produce less secure code.

Then there’s copyright to worry about.

Augment’s models were undoubtedly trained on publicly available data, like all generative AI models — some of which may have been copyrighted or under a restrictive license. Some vendors have argued that fair use doctrine shields them from copyright claims, while at the same time rolling out tools to mitigate potential infringement. But that hasn’t stopped coders from filing class action lawsuits over what they allege are open licensing and IP violations.

To all this, Ostrovsky says: “Current AI coding assistants don’t adequately understand the programmer’s intent, improve software quality nor facilitate team productivity, and they don’t properly protect intellectual property. Augment’s engineering team boasts deep AI and systems expertise. We’re poised to bring AI coding assistance innovations to developers and software teams.”

Augment, which is based in Palo Alto, has around 50 employees; Ostrovsky expects that number to double by the end of the year.

