From Digital Age to Nano Age. WorldWide.

Tag: cloud

Robotic Automations

IBM moves deeper into hybrid cloud management with $6.4B HashiCorp acquisition | TechCrunch


IBM wisely gravitated away from trying to be a pure cloud infrastructure vendor years ago, recognizing that it could never compete with the big three: Amazon, Microsoft and Google. It has since moved on to helping IT departments manage complex hybrid environments, using its financial clout to acquire a portfolio of high-profile companies.

It began with the $34 billion Red Hat acquisition in 2018, continued with the Apptio acquisition last year, and kept going on Wednesday when the company announced that it would acquire cloud management vendor HashiCorp for $6.4 billion.

With HashiCorp, Big Blue gets a set of cloud lifecycle management and security tools, and a company that is growing considerably faster than any of IBM’s other businesses — although the revenue is small by IBM standards: $155 million last quarter, up 15% over the prior year. That still makes it a healthy, growing business for IBM to add to its stable of hybrid cloud tools.

IBM CEO Arvind Krishna certainly sees the value of this piece to his company’s hybrid strategy, and he even threw in an AI reference for good measure. “HashiCorp has a proven track record of enabling clients to manage the complexity of today’s infrastructure and application sprawl. Combining IBM’s portfolio and expertise with HashiCorp’s capabilities and talent will create a comprehensive hybrid cloud platform designed for the AI era,” he said in a statement.

HashiCorp made headlines last year when it changed the license on its open source Terraform tool to be more friendly to the company. The community that helped build Terraform wasn’t happy and responded by launching a new open-source alternative called OpenTofu. HashiCorp recently accused the new community of misusing Terraform’s open-source code when it created the OpenTofu fork. Now that the company is part of IBM, it will be interesting to see whether it continues to pursue this line of thinking.

It’s worth noting that Red Hat also made headlines last year when it changed its open-source licensing terms, also causing consternation in the open-source community. Perhaps these companies will fit well together, both from a software perspective and their shifting views on open source.

Just this week, the company introduced a new platform with the release of the Infrastructure Cloud, a concept that should fit nicely inside IBM’s hybrid cloud product catalog. While the release didn’t add much in terms of functionality, it did unify the offerings under a single umbrella, making it easier for sales and marketing to present to customers.

If IBM treats HashiCorp in a similar way to Red Hat, the company would maintain its independence inside the IBM family of products. Tim Crawford, a former CIO who runs the research firm AVOA, says IBM would be wise to keep it neutral.

“My reservation would be if IBM moves away from Hashicorp’s neutral stance in working with multiple cloud providers and focuses on IBM Cloud. I suspect that would not be the case as IBM has recently shown how they are more open with other cloud providers,” Crawford wrote in a recent blog post.

HashiCorp was founded in 2012 and raised almost $350 million before going public in 2021.


Software Development in Sri Lanka


Google Cloud Next 2024: Everything announced so far | TechCrunch


Google’s Cloud Next 2024 event takes place in Las Vegas through Thursday, and that means lots of new cloud-focused news on everything from Gemini, Google’s AI-powered chatbot, to devops and security. Last year’s event was the first in-person Cloud Next since 2019, and Google took to the stage to show off its ongoing dedication to AI with Duet AI for Gmail and many other debuts, including an expansion of generative AI across its security product line and other enterprise-focused updates.

Don’t have time to watch the full archive of Google’s keynote event? That’s OK; we’ve summed up the most important parts of the event below, with additional details from the TechCrunch team on the ground at the event. And Tuesday’s updates weren’t the only things Google made available to non-attendees — Wednesday’s developer-focused stream started at 10:30 a.m. PT.

Google Vids

Leveraging AI to help customers develop creative content is something Big Tech has been chasing, and on Tuesday Google introduced its version. Google Vids, a new AI-fueled video creation tool, is the latest feature added to Google Workspace.

Here’s how it works: Google says users can make videos alongside other Workspace tools like Docs and Sheets, with editing, writing and production all in one place. You can also collaborate with colleagues in real time within Google Vids. Read more

Gemini Code Assist

After reading about Google’s new Gemini Code Assist, an enterprise-focused AI code completion and assistance tool, you may be asking yourself if that sounds familiar. And you would be correct. TechCrunch Senior Editor Frederic Lardinois writes that “Google previously offered a similar service under the now-defunct Duet AI branding.” Then Gemini came along. Code Assist is a direct competitor to GitHub’s Copilot Enterprise. Here’s why

And to put Gemini Code Assist into context, Alex Wilhelm breaks down its competition with Copilot, and its potential risks and benefits to developers, in the latest TechCrunch Minute episode.

Google Workspace


Among the new features are voice prompts to kick off the AI-based “Help me write” feature in Gmail while on the go. Another for Gmail is a way to instantly turn rough email drafts into more polished emails. Over on Sheets, you can send out a customizable alert when a certain field changes, and a new set of templates makes starting a new spreadsheet easier. For the Docs lovers, there is now support for tabs, which, according to the company, let you “organize information in a single document instead of linking to multiple documents or searching through Drive.” Of course, subscribers get the goodies first. Read more

Google also seems to have plans to monetize two of its new AI features for the Google Workspace productivity suite as $10-per-user-per-month add-on packages. One is an AI meetings and messaging add-on that takes notes for you, provides meeting summaries and translates content into 69 languages. The other is a new AI security add-on, which helps admins keep Google Workspace content more secure. Read more

Imagen 2

In February, Google announced an image generator built into Gemini, Google’s AI-powered chatbot. The company pulled it shortly after it was found to be randomly injecting gender and racial diversity into prompts about people, resulting in some offensive inaccuracies. While we waited for an eventual re-release, Google came out with the enhanced image-generating tool, Imagen 2. This is inside its Vertex AI developer platform and has more of a focus on enterprise. Imagen 2 is now generally available and comes with some fun new capabilities, including inpainting and outpainting. There’s also what Google’s calling “text-to-live images,” where you can create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. Read more

Vertex AI Agent Builder

We can all use a little bit of help, right? Meet Google’s Vertex AI Agent Builder, a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

To do this, the company uses a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which may or may not be accurate). Read more

Gemini comes to databases

Google calls Gemini in Databases a collection of features that “simplify all aspects of the database journey.” In less jargony language, it’s a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases. Read more

Google renews its focus on data sovereignty


Google has offered sovereign cloud services before, but it is now focusing more on partnerships rather than building them out on its own. Read more

Security tools get some AI love


Google is jumping aboard the train of productizing generative AI-powered security tools with a number of new products and features aimed at large companies. Those include Threat Intelligence, which can analyze large portions of potentially malicious code and lets users perform natural language searches for ongoing threats or indicators of compromise. Another is Chronicle, Google’s cybersecurity telemetry offering for cloud customers, which assists with cybersecurity investigations. The third is the enterprise cybersecurity and risk management suite Security Command Center. Read more

Nvidia’s Blackwell platform

One of the most anticipated announcements was that Nvidia’s next-generation Blackwell platform is coming to Google Cloud in early 2025. Yes, that seems far away. However, here is what to look forward to: support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training. Oh, and we can reveal that the GB200 servers will be liquid-cooled. Read more

Chrome Enterprise Premium

Meanwhile, Google is expanding its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium. What’s new here mostly pertains to the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most of the high-value work inside a company is done. Read more

Gemini 1.5 Pro


Everyone can use a “half” every now and again, and Google obliges with Gemini 1.5 Pro. This, Kyle Wiggers writes, is “Google’s most capable generative AI model,” and it is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. Here’s what you get for that half: a big jump in the amount of context the model can process, from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”). Read more
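To make the “tokens” idea concrete, here is a toy sketch. Real models use learned subword vocabularies rather than this naive fixed-width splitter, so treat it purely as intuition for how one word can become several tokens:

```python
# Toy illustration of tokens as subdivided bits of raw data.
# Real tokenizers use learned subword vocabularies (e.g. SentencePiece);
# this fixed-width splitter only mimics the idea.
def toy_tokenize(text, piece_size=3):
    """Chop each word into fixed-size chunks, mimicking subword pieces."""
    tokens = []
    for word in text.split():
        tokens.extend(word[i:i + piece_size] for i in range(0, len(word), piece_size))
    return tokens

print(toy_tokenize("fantastic"))  # ['fan', 'tas', 'tic'] -- three tokens for one word
```

The upshot: a 1 million-token window does not mean 1 million words; depending on the text, it works out to roughly several hundred thousand words’ worth of these pieces.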

Open source tools


At Google Cloud Next 2024, the company debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure. One is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA, or Accelerated Linear Algebra, devices. Then there is JetStream, a new engine to run generative AI models. The third is MaxText, a collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. Read more

Axion


We don’t know a lot about this one, but here is what we do know: Google Cloud joins AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion. Frederic Lardinois writes that “based on Arm’s Neoverse V2 designs, Google says its Axion instances offer 30% better performance than other Arm-based instances from competitors like AWS and Microsoft and up to 50% better performance and 60% better energy efficiency than comparable X86-based instances.” Read more

The entire Google Cloud Next keynote

If all of that isn’t enough of an AI and cloud update deluge, you can watch the entire event keynote via the embed below.

Google Cloud Next’s developer keynote

On Wednesday, Google held a separate keynote for developers, offering a deeper dive into the ins and outs of a number of tools outlined during the Tuesday keynote, including Gemini Cloud Assist, using AI for product recommendations and chat agents, and ending with a showcase from Hugging Face. You can check out the full keynote below.



Google Cloud Next 2024: Watch the keynote on Gemini AI, enterprise reveals right here | TechCrunch


It’s time for Google’s annual look up to the cloud, this time with a big dose of AI.

At 9 a.m. PT on Tuesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote for this year’s Google Cloud Next event, and you can watch the archive of the reveals above, or right here.

After this week we’ll know more about Google’s attempts to help the enterprise enter the age of AI. From a deeper dive into Gemini, the company’s AI-powered chatbot, to securing AI products and implementing generative AI into cloud applications, Google will continue to cover a wide range of topics.

We’re also keeping tabs on everything Google’s announcing at Cloud Next 2024, from Google Vids to Gemini Code Assist to Google Workspace updates.

And for those more interested in Google’s details and reveals for developers, the developer keynote kicked off at 11:30 a.m. PT on Wednesday, and you can catch up on that full stream right here or via the embed below.



With Vertex AI Agent Builder, Google Cloud aims to simplify agent creation | TechCrunch


AI agents are the hot new craze in generative AI. Unlike the previous generation of chatbots, these agents can do more than simply answer questions: They can take actions based on the conversation and even interact with back-end transactional systems in an automated manner.

On Tuesday at Google Cloud Next, the company introduced a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

The no-code product builds upon Google’s Vertex AI Search and Conversation product released previously. It’s also built on top of the company’s latest Gemini large language models and relies both on RAG APIs and vector search, two popular methods used industry-wide to reduce hallucinations, where models make up incorrect answers when they can’t find an accurate response.

Part of the way the company is improving the quality of the answers is through a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which may or may not be accurate).

“We’re now bringing you grounding in Google Search, bringing the power of the world’s knowledge that Google Search offers through our grounding service to models. In addition, we also support the ability to ground against enterprise data sources,” Kurian said. The latter might be more suitable for enterprise customers.
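As a rough illustration of what grounding means in practice, the sketch below retrieves the most relevant source passages first and then constrains the model to answer only from them. The term-overlap scorer and the `generate` callback are placeholders for illustration, not Vertex AI’s actual API:

```python
# Minimal sketch of grounding: tie answers to retrieved source passages
# instead of the model's parametric memory. The crude term-overlap scorer
# and the generate() callback are stand-ins, not Google's API.
def grounded_answer(question, documents, generate, top_k=2):
    words = question.lower().split()
    # 1. Rank candidate sources by term overlap with the question.
    ranked = sorted(documents, key=lambda d: -sum(w in d.lower() for w in words))
    sources = ranked[:top_k]
    # 2. Instruct the model to answer only from the retrieved sources.
    prompt = "Answer using only these sources:\n" + "\n".join(sources) + "\nQ: " + question
    # 3. Return the sources alongside the answer so it can be verified.
    return generate(prompt), sources

docs = [
    "Vertex AI Agent Builder is a no-code tool for building conversational agents.",
    "Gemini 1.5 Pro supports a context window of up to 1 million tokens.",
]
answer, sources = grounded_answer("What is Agent Builder?", docs, generate=lambda p: p)
```

Grounding against Google Search corresponds to drawing the document pool from the web index, while grounding against enterprise data sources corresponds to swapping in a company’s own repository.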


In a demo, the company used this capability to create an agent that analyzes previous marketing campaigns to understand a company’s brand style, and then apply that knowledge to help generate new ideas that are consistent with that style. The demo analyzed over 3,000 brand images, descriptions, videos and documents related to this fictional company’s products stored on Google Drive. It then helped generate pictures, captions and other content based on its understanding of the fictional company’s style.

Although you can build any type of agent, this particular example puts Google in direct competition with Adobe, which released its creative generative AI tool Firefly last year and GenStudio last month to help companies build content that doesn’t stray from their brand style. The flexibility is there to build anything, but the question is whether you’d rather buy something off the shelf if it already exists.

The new capabilities are already available, according to Google. The tool supports multiple languages and offers country-based API endpoints in the U.S. and EU.



Google injects generative AI into its cloud security tools | TechCrunch


At its annual Cloud Next conference in Las Vegas, Google on Tuesday introduced new cloud-based security products and services — in addition to updates to existing products and services — aimed at customers managing large, multi-tenant corporate networks.

Many of the announcements had to do with Gemini, Google’s flagship family of generative AI models.

For example, Google unveiled Gemini in Threat Intelligence, a new Gemini-powered component of the company’s Mandiant cybersecurity platform. Now in public preview, Gemini in Threat Intelligence can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise, as well as summarize open source intelligence reports from around the web.

“Gemini in Threat Intelligence now offers conversational search across Mandiant’s vast and growing repository of threat intelligence directly from frontline investigations,” Sunil Potti, GM of cloud security at Google, wrote in a blog post shared with TechCrunch. “Gemini will navigate users to the most relevant pages in the integrated platform for deeper investigation … Plus, [Google’s malware detection service] VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform.”

Elsewhere, Gemini can now assist with cybersecurity investigations in Chronicle, Google’s cybersecurity telemetry offering for cloud customers. Set to roll out by the end of the month, the new capability guides security analysts through their typical workflows, recommending actions based on the context of a security investigation, summarizing security event data and creating breach and exploit detection rules from a chatbot-like interface.

And in Security Command Center, Google’s enterprise cybersecurity and risk management suite, a new Gemini-driven feature lets security teams search for threats using natural language while providing summaries of misconfigurations, vulnerabilities and possible attack paths.

Rounding out the security updates was privileged access manager (in preview), a service that offers just-in-time, time-bound and approval-based access options designed to help mitigate risks tied to privileged access misuse. Google is also rolling out principal access boundary (also in preview), which lets admins implement restrictions on network root-level users so that those users can only access authorized resources within a specifically defined boundary.

Lastly, Autokey (in preview) aims to simplify creating and managing customer encryption keys for high-security use cases, while Audit Manager (also in preview) provides tools for Google Cloud customers in regulated industries to generate proof of compliance for their workloads and cloud-hosted data.

“Generative AI offers tremendous potential to tip the balance in favor of defenders,” Potti wrote in the blog post. “And we continue to infuse AI-driven capabilities into our products.”

Google isn’t the only company attempting to productize generative AI–powered security tooling. Microsoft last year launched a set of services that leverage generative AI to correlate data on attacks while prioritizing cybersecurity incidents. Startups, including Aim Security, are also jumping into the fray, aiming to corner the nascent space.

But with generative AI’s tendency to make mistakes, it remains to be seen whether these tools have staying power.



Nvidia's next-gen Blackwell platform will come to Google Cloud in early 2025 | TechCrunch


It’s Google Cloud Next in Las Vegas this week, and that means it’s time for a bunch of new instance types and accelerators to hit the Google Cloud Platform. In addition to the new custom Arm-based Axion chips, most of this year’s announcements are about AI accelerators, whether built by Google or from Nvidia.

Only a few weeks ago, Nvidia announced its Blackwell platform. But don’t expect Google to offer those machines anytime soon. Support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training will arrive in early 2025. One interesting nugget from Google’s announcement: The GB200 servers will be liquid-cooled.

This may sound like a premature announcement, but Nvidia has said that its Blackwell chips won’t be publicly available until the last quarter of this year, so early 2025 on Google Cloud isn’t much of a wait beyond that.


Before Blackwell

For developers who need more power to train LLMs today, Google also announced the A3 Mega instance. This instance, which the company developed together with Nvidia, features the industry-standard H100 GPUs but combines them with a new networking system that can deliver up to twice the bandwidth per GPU.

Another new A3 instance is A3 confidential, which Google described as enabling customers to “better protect the confidentiality and integrity of sensitive data and AI workloads during training and inferencing.” The company has long offered confidential computing services that encrypt data in use, and here, once enabled, confidential computing will encrypt data transfers between Intel’s CPU and the Nvidia H100 GPU via protected PCIe. No code changes required, Google says. 

As for Google’s own chips, the company on Tuesday launched its Cloud TPU v5p processors — the most powerful of its homegrown AI accelerators yet — into general availability. These chips feature a 2x improvement in floating point operations per second and a 3x improvement in memory bandwidth speed.


All of those fast chips need an underlying architecture that can keep up with them, so in addition to the new chips, Google on Tuesday also announced new AI-optimized storage options. Hyperdisk ML, now in preview, is the company’s next-gen block storage service that can improve model load times by up to 3.7x, according to Google.

Google Cloud is also launching a number of more traditional instances, powered by Intel’s fourth- and fifth-generation Xeon processors. The new general-purpose C4 and N4 instances, for example, will feature the fifth-generation Emerald Rapids Xeons, with the C4 focused on performance and the N4 on price. The new C4 instances are now in private preview, and the N4 machines are generally available today.

Also new, but still in preview, are the C3 bare-metal machines, powered by older fourth-generation Intel Xeons, the X4 memory-optimized bare metal instances (also in preview) and the Z3, Google Cloud’s first storage-optimized virtual machine that promises to offer “the highest IOPS for storage optimized instances among leading clouds.”



Google goes all in on generative AI at Google Cloud Next | TechCrunch


This week in Las Vegas, 30,000 folks came together to hear the latest and greatest from Google Cloud. What they heard was all generative AI, all the time. Google Cloud is first and foremost a cloud infrastructure and platform vendor. If you didn’t know that, you might have missed it in the onslaught of AI news.

Not to minimize what Google had on display, but much like Salesforce last year at its New York City traveling road show, the company gave little more than a passing nod to its core business — except in the context of generative AI, of course.

Google announced a slew of AI enhancements designed to help customers take advantage of the Gemini large language model (LLM) and improve productivity across the platform. It’s a worthy goal, of course, and throughout the main keynote on Day 1 and the Developer Keynote the following day, Google peppered the announcements with a healthy number of demos to illustrate the power of these solutions.

But many seemed a little too simplistic, even taking into account that they needed to be squeezed into a keynote with a limited amount of time. They relied mostly on examples inside the Google ecosystem, when almost every company has much of its data in repositories outside of Google.

Some of the examples actually felt like they could have been done without AI. During an e-commerce demo, for example, the presenter called the vendor to complete an online transaction. It was designed to show off the communications capabilities of a sales bot, but in reality, the step could have been easily completed by the buyer on the website.

That’s not to say that generative AI doesn’t have some powerful use cases, whether creating code, analyzing and querying a corpus of content, or asking questions of log data to understand why a website went down. What’s more, the task- and role-based agents the company introduced to help individual developers, creative folks, employees and others have the potential to take advantage of generative AI in tangible ways.

But when it comes to building AI tools based on Google’s models, as opposed to consuming the ones Google and other vendors are building for its customers, I couldn’t help feeling that they were glossing over a lot of the obstacles that could stand in the way of a successful generative AI implementation. While they tried to make it sound easy, in reality, it’s a huge challenge to implement any advanced technology inside large organizations.

Big change ain’t easy

Much like other technological leaps of the last 15 years — whether mobile, cloud, containerization or marketing automation — AI has been delivered with lots of promises of potential gains. Yet each of these advancements introduces its own level of complexity, and large companies move more cautiously than we imagine. AI feels like a much bigger lift than Google, or frankly any of the large vendors, is letting on.

What we’ve learned from these previous technology shifts is that they come with a lot of hype and lead to a ton of disillusionment. Years after these advanced technologies were introduced, we’ve seen large companies that perhaps should be taking advantage of them still only dabbling or even sitting out altogether.

There are lots of reasons companies may fail to take advantage of technological innovation, including organizational inertia; a brittle technology stack that makes it hard to adopt newer solutions; or a group of corporate naysayers shutting down even the most well-intentioned initiatives, whether legal, HR, IT or other groups that, for a variety of reasons, including internal politics, continue to just say no to substantive change.

Vineet Jain, CEO at Egnyte, a company that concentrates on storage, governance and security, sees two types of companies: those that have made a significant shift to the cloud already and that will have an easier time when it comes to adopting generative AI, and those that have been slow movers and will likely struggle.

He talks to plenty of companies that still have a majority of their tech on-prem and have a long way to go before they start thinking about how AI can help them. “We talk to many ‘late’ cloud adopters who have not started or are very early in their quest for digital transformation,” Jain told TechCrunch.

AI could force these companies to think hard about making a run at digital transformation, but they could struggle starting from so far behind, he said. “These companies will need to solve those problems first and then consume AI once they have a mature data security and governance model,” he said.

It was always the data

The big vendors like Google make implementing these solutions sound simple, but like all sophisticated technology, looking simple on the front end doesn’t necessarily mean it’s uncomplicated on the back end. As I heard often this week, when it comes to the data used to train Gemini and other large language models, it’s still a case of “garbage in, garbage out,” and that’s even more applicable when it comes to generative AI.

It starts with data. If you don’t have your data house in order, it’s going to be very difficult to get it into shape to train the LLMs on your use case. Kashif Rahamatullah, a Deloitte principal in charge of the Google Cloud practice at his firm, was mostly impressed by Google’s announcements this week, but still acknowledged that companies lacking clean data will have problems implementing generative AI solutions. “These conversations can start with an AI conversation, but that quickly turns into: ‘I need to fix my data, and I need to get it clean, and I need to have it all in one place, or almost one place, before I start getting the true benefit out of generative AI,’” Rahamatullah said.

From Google’s perspective, the company has built generative AI tools to more easily help data engineers build data pipelines to connect to data sources inside and outside of the Google ecosystem. “It’s really meant to speed up the data engineering teams, by automating many of the very labor-intensive tasks involved in moving data and getting it ready for these models,” Gerrit Kazmaier, vice president and general manager for database, data analytics and Looker at Google, told TechCrunch.

That should be helpful in connecting and cleaning data, especially in companies that are further along the digital transformation journey. But for those companies like the ones Jain referenced — those that haven’t taken meaningful steps toward digital transformation — it could present more difficulties, even with these tools Google has created.

All of that doesn’t even take into account that AI comes with its own set of challenges beyond pure implementation, whether it’s an app based on an existing model, or especially when trying to build a custom model, says Andy Thurai, an analyst at Constellation Research. “While implementing either solution, companies need to think about governance, liability, security, privacy, ethical and responsible use and compliance of such implementations,” Thurai said. And none of that is trivial.

Executives, IT pros, developers and others who went to GCN this week might have gone looking for what’s coming next from Google Cloud. But if they didn’t go looking for AI, or they are simply not ready as an organization, they may have come away from Sin City a little shell-shocked by Google’s full concentration on AI. It could be a long time before organizations lacking digital sophistication can take full advantage of these technologies, beyond the more-packaged solutions being offered by Google and other vendors.



Indian government's cloud spilled citizens' personal data online for years | TechCrunch


The Indian government has finally resolved a years-long cybersecurity issue that exposed reams of sensitive data about its citizens. A security researcher exclusively told TechCrunch he found hundreds of documents containing citizens’ personal information — including Aadhaar numbers, COVID-19 vaccination data and passport details — spilling online for anyone to access.

At fault was the Indian government’s cloud service, dubbed S3WaaS, which is billed as a “secure and scalable” system for building and hosting Indian government websites.

Security researcher Sourajeet Majumder told TechCrunch that he found a misconfiguration in 2022 that was exposing citizens’ personal information stored on S3WaaS to the open internet. Because the private documents were inadvertently made public, search engines also indexed the documents, allowing anyone to actively search the internet for the sensitive private citizen data.

With support from digital rights organization the Internet Freedom Foundation, Majumder reported the incident at the time to India’s computer emergency response team, known as CERT-In, and the Indian government’s National Informatics Centre.

CERT-In quickly acknowledged the issue, and links to the sensitive files were removed from public search engines.

But Majumder said that despite repeated warnings about the data spill, the Indian government cloud service was still exposing some individuals’ personal information as recently as last week.

With evidence of ongoing exposures of private data, Majumder asked TechCrunch for help getting the remaining data secured. Majumder said that some citizens’ sensitive data began spilling online long after he first disclosed the misconfiguration in 2022.

TechCrunch reported some of the exposed data to CERT-In. Majumder confirmed that those files are no longer publicly accessible.

When reached prior to publication, CERT-In did not object to TechCrunch publishing details of the security lapse. Representatives for the National Informatics Centre and S3WaaS did not respond to a request for comment.

Majumder said it was not possible to accurately estimate the true extent of this data leak, but warned that bad actors were purportedly selling the data on a known cybercrime forum before it was shuttered by U.S. authorities. CERT-In would not say if bad actors accessed the exposed data.

The exposed data, Majumder said, potentially puts citizens at risk of identity theft and scams.

“More than that, when sensitive health information like COVID test results and vaccine records get out, it’s not just our medical privacy that’s compromised — it stirs fears of discrimination and social rejection,” he said.

Majumder noted that this incident should be a “wake-up call for security reforms.”



The market is forcing cloud vendors to relax data egress fees | TechCrunch


In recent months, the big three cloud vendors — Amazon, Microsoft and Google — have relaxed their egress fees, which are a tax of sorts that the cloud companies charge customers to move their data to another vendor. It’s a way to keep existing customers in the fold, but it’s kind of a ham-handed way to do it, and doesn’t exactly foster goodwill.

As a number of factors come into play, like the reality of a multi-cloud world, a stricter regulatory environment and consumer backlash, these companies are beginning to see the error in their ways by easing these fees, albeit with lots of caveats and a bit of friction involved. For example, there are limits to the kind of data you can move, and each requires you to contact the vendor and open a request to get your own data out of the cloud. But it’s a start at least.
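To get a sense of why these fees matter at enterprise scale, a back-of-the-envelope calculation helps. The per-GB rate, free-tier allowance and data volume below are illustrative assumptions, not any vendor’s published price list; real egress pricing is tiered by region and volume.

```python
# Back-of-the-envelope egress cost estimate. The rate, free tier and
# data volume are illustrative assumptions, not real vendor pricing.

def egress_cost_usd(data_gb: float, rate_per_gb: float,
                    free_tier_gb: float = 0.0) -> float:
    """Cost of moving data out of a cloud, after a free-tier allowance."""
    billable_gb = max(0.0, data_gb - free_tier_gb)
    return billable_gb * rate_per_gb

# Moving a 500 TB data warehouse at a hypothetical $0.09/GB,
# with the first 100 GB free:
cost = egress_cost_usd(500_000, 0.09, free_tier_gb=100)
print(f"${cost:,.0f}")  # $44,991
```

Even at modest per-gigabyte rates, a one-time migration of a large dataset runs into tens of thousands of dollars, which is exactly the lock-in dynamic the recent fee waivers are meant to soften.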

This change of heart is really an acknowledgement of changing market dynamics, says John Dinsdale, chief analyst and managing director at Synergy Research, a firm that tracks the cloud infrastructure market. “I think this is a natural progression of the market. As true competition heats up, it would do cloud providers no good to be seen as being overly protectionist,” Dinsdale told TechCrunch.

“Giving customers what they want is just the right business strategy. In the IT world of the last few years, legacy companies that have tried to hang on to the old ways of doing things have not done well,” he said.

It’s also clear that we are moving into a multi-cloud world where it’s more important than ever to remove friction around moving data, says Jake Graham, CEO and co-founder at Bobsled, a startup that helps customers move data between clouds. His role puts him on the front lines of this issue.

“In the original cloud world, the three major cloud vendors were really fighting to try to build what felt like walled gardens, and as long as you built on top of them, everything was great. But going across them was really challenging,” Graham said. “They’re starting to get significant pushback from their enterprise customers, who are saying that there is no world in which a global enterprise is not using multiple platforms.” He says that charging these fees is putting up a significant barrier to moving data, making it difficult to share with customers, and even within divisions inside the same company.

Rudina Seseri, founder and managing partner at Glasswing Ventures, says the shift is partly due to regulatory pressure, but that isn’t the only reason. “At a high level, this emergence of regulation is a pretty simple explanation for the sudden change in behavior,” she said. “However, I think it is also worth pointing out the optics of preemptively making such a language switch, and how Google has used it as a marketing tool against Azure. If these companies see the demise of egress fees as an inevitability, then Google certainly has first-mover advantage towards painting itself as the ‘less restrictive’ cloud and attracting early-stage customers,” she said.

“Metaphorically, the market dynamic is moving away from the stick and back towards the carrot. Cloud customers looking to switch providers will need to be retained through innovative and accessible features now that the punishment of egress fees is being phased out,” Seseri said.

David Linthicum, a longtime cloud consultant, says that while these recent announcements are a pleasant PR move, he warns folks to review their bills carefully because egress fees aren’t the only problem. “This is a nice surprise, but it’s not necessarily consequential. Customers have to consider the costs holistically,” Linthicum told TechCrunch. “In other words, what are we paying for the services we’re leveraging? What are we paying for the networking fees, the egress fees, all the other hidden fees that come along with what people call junk fees that come from the cloud vendors?”

But this may not affect startups as much as larger enterprise customers. “There are more moving parts in a cloud ecosystem than just storage, such as services required for scaling and security, and the largest companies have built tight infrastructures that can be onerous to unwind,” Seseri said. “The experience of startups, however, will certainly improve as providers now must lean further into innovative features and improved customer satisfaction to win long-term loyalty.”

Graham, whose primary business is helping move data, sees his whole business model affected by these fees. He says the recent changes are a small but important step, but he also sees a future where it’s increasingly difficult to determine what is an egress fee and what’s not, which could lead to the ultimate demise of these fees.

That’s because migrations take a long time. It’s not a clean break like, “I was in AWS and now I’m in GCP.” It’s a lengthy process stretching over years, in which data sources that need to communicate live in both clouds for a period of time. At the same time, he says, the original cloud vendor is working hard to get the customer to change their mind and come back, and it’s an impossible balancing act for these companies.

“You’re just going to have this battle between the team that is associated with winning back the customer, trying to make the customer happy, and another group that says, wait a second, we already lost this customer. We should be charging them everything. Why are we giving them favorable treatment?”

As data becomes increasingly valuable in the age of AI, being able to move data and put it to work is growing in importance for everyone. Cloud vendors are going to be a lot better off getting in front of this trend instead of throwing up roadblocks to make it more difficult to move data around. Perhaps this is just the start of something much bigger.


