From Digital Age to Nano Age. WorldWide.

Tag: AWS


Matt Garman taking over as CEO with AWS at crossroads | TechCrunch


It’s tough to say that a $100 billion business finds itself at a critical juncture, but that’s the case with Amazon Web Services, the cloud arm of Amazon, and the clear market leader in the cloud infrastructure market. On Tuesday, the company announced that CEO Adam Selipsky was stepping down to spend more time with […]




AWS confirms European 'sovereign cloud' to launch in Germany by 2025, plans €7.8B investment over 15 years | TechCrunch


Amazon Web Services (AWS), Amazon’s cloud computing business, has confirmed further details of its European “sovereign cloud” which is designed to enable greater data residency across the region. The company said that the first AWS sovereign cloud region will be in the German state of Brandenburg, and will go live by the end of 2025. […]




AWS CEO Adam Selipsky steps down | TechCrunch


Adam Selipsky is stepping down from his role as CEO of AWS, Amazon PR has confirmed to TechCrunch.  In a memo shared internally by Amazon CEO Andy Jassy and published this morning to the company’s blog, Jassy said that AWS sales chief Matt Garman will be promoted to CEO. Garman previously headed AWS’ EC2 cloud […]




Bedrock Studio is Amazon's attempt to simplify generative AI app development | TechCrunch


Amazon is launching a new tool, Bedrock Studio, designed to let organizations experiment with generative AI models, collaborate on those models, and ultimately build generative AI-powered apps.

Available in public preview starting today, the web-based Bedrock Studio — a part of Bedrock, Amazon’s generative AI tooling and hosting platform — provides what Amazon describes in a blog post as a “rapid prototyping environment” for generative AI.

Bedrock Studio guides developers through the steps to evaluate, analyze, fine-tune and share generative AI models from Anthropic, Cohere, Mistral, Meta and other Bedrock partners, as well as test different model settings and guardrails and integrate outside data sources and APIs. Bedrock Studio also offers tools to support collaboration with team members to create and refine generative AI apps, including single sign-on credentials for enterprises using them.

Image Credits: Amazon

Bedrock Studio automatically deploys the relevant Amazon Web Services (AWS) resources as developers request them, Amazon says, and — in the interest of security — apps and data never leave the signed-in AWS account.

“When you create applications in Amazon Bedrock Studio, the corresponding managed resources such as knowledge bases, agents and [more] are automatically deployed in your AWS account,” Amazon principal developer advocate Antje Barth explains in the blog post. “You can use the Amazon Bedrock API to access those resources in downstream applications.”
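Barth’s post stops short of showing the calls involved, but a minimal sketch of what “use the Amazon Bedrock API to access those resources in downstream applications” can look like is below, using boto3’s bedrock-runtime client. The region, model ID and prompt are placeholders for illustration rather than anything Bedrock Studio generates; a Studio-built app would target whichever models and knowledge bases its project actually provisioned.

    import json

    import boto3

    # Placeholder region and model ID; a Bedrock Studio project would supply
    # the resources actually deployed into the signed-in AWS account.
    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [
                {"role": "user", "content": "Summarize our Q1 support tickets."}
            ],
        }),
    )

    # The response body is a JSON stream; Claude-style models return a list
    # of content blocks.
    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])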

Less an attempt to reinvent the wheel than to streamline existing products and services, Bedrock Studio appears to string together AWS tools that have been around for some time, topped with a sprinkle of corporate governance and compliance capabilities. One imagines it’s all in service of Amazon’s bid to make Bedrock the go-to platform for generative AI app development.

It’s a steep road ahead for Bedrock, which is up against generative AI development platforms from Google Cloud, Microsoft Azure and others. But Amazon telegraphed in a recent earnings report that Bedrock — along with AWS’ other generative AI-related services — is holding its own, with AWS’ generative AI businesses hitting a combined “multi-billion dollar run rate,” according to Amazon CEO Andy Jassy.



Cloud revenue accelerates 21% to $76 billion for the latest earnings cycle | TechCrunch


If you were concerned about slowing cloud infrastructure growth for a time in 2023, you can finally relax: The cloud was back with a vengeance this quarter. The market as a whole grew a healthy $13.5 billion to $76 billion, up 21% over the first quarter of 2023, per Synergy Research.

That’s healthy growth by any measure.

If you’re wondering what’s driving the growth, you probably guessed that it’s related to generative AI and the copious amount of data required to build the underlying models. Whether it’s Microsoft’s close links to OpenAI, Google Cloud making a slew of AI announcements at its recent customer conference or Amazon’s infrastructure managing the data side of the equation, AI is driving lots of business for these vendors.

“There is a symbiotic relationship between the rapid advancement and adoption of AI and the scalable ‘Big 3’ cloud infrastructure providers,” said Rudina Seseri, founder and managing partner at Glasswing Ventures, a firm that invests heavily in AI startups. “AI actually makes the cloud providers more valuable. By creating more capabilities for computing through automation and augmentation within the enterprise, there is a corresponding increased demand for the underlying computational power provided by the Big 3 cloud infrastructure vendors, as evidenced by their immense growth in recent quarters.”

Seseri also sees the cloud vendors making it easier for startups to build on top of their infrastructure in the coming years. “For startups, many depend on the cloud providers, having built atop these immense platforms. I predict we will see immense investment in AI-optimized infrastructure by the major cloud platforms, as it is a key driver behind the sustained growth in cloud computing, which will make it easier to build AI platforms and products on the cloud,” she said.

And these companies are reaping the financial windfall from the newfound interest in this technology. Altimeter partner Jamin Ball reports that those rewards started coming in last quarter, and the ball kept on rolling into this one. Amazon’s cloud growth had dropped as low as 12% in Q2 and Q3 last year, climbing a bit to 13% in Q4. But the company really kicked it up a notch this quarter with revenue of $25 billion, up 17% over the prior year. That’s a $100 billion run rate, good for 31% market share.

Ball’s numbers indicate that Azure continues to kill it. Microsoft now has 25% market share, good for a $76 billion run rate, up 31% over the previous year. Google is a strong third with 11% market share, up 28% YoY (although it’s important to note that Ball’s figure includes Google Workspace, while Synergy counts only infrastructure and platform revenue).

Image Credits: Jamin Ball

The days of cost cutting in the cloud appear to be over. And although we probably aren’t going back to the heady growth numbers of 2021 and 2022, AI seems to be bringing a new wave of substantial growth to the cloud vendors.

“In terms of annualized run rate, we now have a $300 billion market, which is growing at 21% per year,” Synergy’s chief analyst John Dinsdale said in a statement. “We will not return to the growth rates seen prior to 2022, as the market has become too massive to grow that rapidly, but we will see the market continue to expand substantially. We are forecasting that it will double in size over the next four years.”

As companies’ thirst for AI, and for the data management that comes with it, continues to grow, it seems that the cloud glory days are back. The growth may not be as gaudy as back in the day, but it’s still pretty darn good for a maturing industry sector, with all signs pointing to solid growth in the coming years.

Image Credits: Synergy Research



Amazon CodeWhisperer is now called Q Developer and is expanding its functions | TechCrunch


Pour one out for CodeWhisperer, Amazon’s AI-powered assistive coding tool. As of today, it’s kaput — sort of.

CodeWhisperer is now Q Developer, a part of Amazon’s Q family of business-oriented generative AI chatbots that also extends to the newly-announced Q Business. Available through AWS, Q Developer helps with some of the tasks developers do in the course of their daily work, like debugging and upgrading apps, troubleshooting, and performing security scans — much like CodeWhisperer did.

In an interview with TechCrunch, Doug Seven, GM and director of AI developer experiences at AWS, implied that CodeWhisperer was a bit of a branding fail. Third-party metrics reflect as much; even with a free tier, CodeWhisperer struggled to match the momentum of chief rival GitHub Copilot, which has over 1.8 million paying individual users and tens of thousands of corporate customers. (Poor early impressions surely didn’t help.)

“CodeWhisperer is where we got started [with code generation], but we really wanted to have a brand — and name — that fit a wider set of use cases,” Seven said. “You can think of Q Developer as the evolution of CodeWhisperer into something that’s much more broad.”

To that end, Q Developer can generate code including SQL, a programming language commonly used to create and manage databases, as well as test that code and assist with transforming and implementing new code ideated from developer queries.

Similar to Copilot, customers can fine-tune Q Developer on their internal codebases to improve the relevancy of the tool’s programming recommendations. (The now-deprecated CodeWhisperer offered this option, too.) And, thanks to a capability called Agents, Q Developer can autonomously perform things like implementing features and documenting and refactoring (i.e. restructuring) code.

Give Q Developer a request like “create an ‘add to favorites’ button in my app,” and it will analyze the app code, generate new code if necessary, create a step-by-step plan, and complete tests of the code before executing the proposed changes. Developers can review and iterate on the plan before Q implements it, connecting steps together and applying updates across the necessary files, code blocks and test suites.

“What happens behind the scenes is, Q Developer actually spins up a development environment to work on the code,” Seven said. “So, in the case of feature development, Q Developer takes the entire code repository, creates a branch of that repository, analyzes the repository, does the work that it’s been asked to do and returns those code changes to the developer.”

Image Credits: Amazon

Agents can also automate and manage code upgrading processes, Amazon says, with Java conversions live today (specifically Java 8 and 11 built using Apache Maven to Java version 17) and .NET conversions coming soon. “Q Developer analyzes the code — looking for anything that needs to be upgraded — and makes all those changes before returning it to the developer to review and commit themselves,” Seven added.

To me, Agents sounds a lot like GitHub’s Copilot Workspace, which similarly generates and implements plans for bug fixes and new features in software. And — as with Workspace — I’m not entirely convinced that this more autonomous approach can solve the issues surrounding AI-powered coding assistants.

A GitClear analysis of over 150 million lines of code committed to project repos over the past several years found that Copilot was resulting in more erroneous code being pushed to codebases. Elsewhere, security researchers have warned that Copilot and similar tools can amplify existing bugs and security issues in software projects.

This isn’t surprising. AI-powered coding assistants seem impressive. But they’re trained on existing code, and their suggestions reflect patterns in other programmers’ work — work that can be seriously flawed. Assistants’ guesses create bugs that are often difficult to spot, especially when developers — who are adopting AI coding assistants in great numbers — defer to the assistants’ judgement.

In less risky territory beyond coding, Q Developer can help manage a company’s cloud infrastructure on AWS — or at least get them the info they need to do the managing themselves.

Q Developer can fulfill requests like “List all of my Lambda functions” and “list my resources residing in other AWS regions.” Currently in preview, the bot can also generate (but not execute) AWS Command Line Interface commands and answer AWS cost-related questions such as “What were the top three highest-cost services in Q1?”
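For a sense of what a request like that resolves to under the hood, here is a rough equivalent a developer could run directly with boto3; it is an illustration of the underlying AWS API, not a peek at Q Developer’s actual implementation, and the region is a placeholder.

    import boto3

    # Placeholder region; Q Developer would scope this to the signed-in account.
    lambda_client = boto3.client("lambda", region_name="us-east-1")

    # Page through every Lambda function in the region and print a short summary.
    paginator = lambda_client.get_paginator("list_functions")
    for page in paginator.paginate():
        for fn in page["Functions"]:
            print(f"{fn['FunctionName']}  runtime={fn['Runtime']}  "
                  f"modified={fn['LastModified']}")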

Image Credits: Amazon

So how much do these generative AI conveniences cost?

Q Developer is available for free in the AWS Console, Slack and IDEs such as Visual Studio Code, GitLab Duo and JetBrains — but with limitations. The free version doesn’t allow fine-tuning to custom libraries, packages and APIs, and opts users into a data collection scheme by default. It also imposes monthly caps, including a maximum of 5 Agents tasks (e.g. implementing a feature) per month and 25 queries about AWS account resources per month. (It’s baffling to me that Amazon would impose a cap on questions one can ask about its own services, but here we are.)

The premium version of Q Developer, Q Developer Pro, costs $19 per month per user and adds higher usage limits, tools to manage users and policies, single sign-on and — perhaps most importantly — IP indemnity.

Image Credits: Amazon

In many cases, the models underpinning code-generating services such as Q Developer are trained on code that’s copyrighted or under a restrictive license. Vendors claim that fair use protects them in the event that the models were knowingly or unknowingly trained on copyrighted code — but not everyone agrees. GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright by allowing Copilot to regurgitate licensed code snippets without providing credit.

Amazon says that it’ll defend Q Developer Pro customers against claims alleging that the service infringes on a third-party’s IP rights so long as they let AWS control their defense and settle “as AWS deems appropriate.”





Amazon wants to host companies' custom generative AI models | TechCrunch


AWS, Amazon’s cloud computing business, wants to be the go-to place for companies to host and fine-tune their custom generative AI models.

Today, AWS announced the launch of Custom Model Import (in preview), a new feature in Bedrock, AWS’ enterprise-focused suite of generative AI services, that allows organizations to import and access their in-house generative AI models as fully managed APIs.

Companies’ proprietary models, once imported, benefit from the same infrastructure as other generative AI models in Bedrock’s library (e.g. Meta’s Llama 3, Anthropic’s Claude 3), including tools to expand their knowledge, fine-tune them and implement safeguards to mitigate their biases.

“There have been AWS customers that have been fine-tuning or building their own models outside of Bedrock using other tools,” Vasi Philomin, VP of generative AI at AWS, told TechCrunch in an interview. “This Custom Model Import capability allows them to bring their own proprietary models to Bedrock and see them right next to all of the other models that are already on Bedrock — and use them with all of the workflows that are also already on Bedrock, as well.”

Importing custom models

According to a recent poll from Cnvrg, Intel’s AI-focused subsidiary, the majority of enterprises are approaching generative AI by building their own models and refining them for their applications. Those same enterprises say that they see infrastructure, including cloud compute infrastructure, as their greatest barrier to deployment, per the poll.

With Custom Model Import, AWS aims to rush in to fill the need while maintaining pace with cloud rivals. (Amazon CEO Andy Jassy foreshadowed as much in his recent annual letter to shareholders.)

For some time, Vertex AI, Google’s analog to Bedrock, has allowed customers to upload generative AI models, tailor them and serve them through APIs. Databricks, too, has long provided toolsets to host and tweak custom models, including its own recently released DBRX.

Asked what sets Custom Model Import apart, Philomin asserted that it — and by extension Bedrock — offer a wider breadth and depth of model customization options than the competition, adding that “tens of thousands” of customers today are using Bedrock.

“Number one, Bedrock provides several ways for customers to deal with serving models,” Philomin said. “Number two, we have a whole bunch of workflows around these models — and now customers’ models can stand right next to all of the other models that we have already available. A key thing that most people like about this is the ability to be able to experiment across multiple different models using the same workflows, and then actually take them to production from the same place.”

So what are the alluded-to model customization options?

Philomin points to Guardrails, which lets Bedrock users configure thresholds to filter — or at least attempt to filter — models’ outputs for things like hate speech, violence and private personal or corporate information. (Generative AI models are notorious for going off the rails in problematic ways, including leaking sensitive info; AWS’ own models have been no exception.) He also highlighted Model Evaluation, a Bedrock tool customers can use to test how well a model — or several — perform across a given set of criteria.

Both Guardrails and Model Evaluation are now generally available following a several-months-long preview.
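Amazon describes Guardrails only at a high level, so the following is a hedged sketch of what configuring those thresholds can look like through boto3’s bedrock control-plane client. The guardrail name, blocked-content messages and filter strengths are assumptions for illustration, and the parameter shapes should be checked against the current create_guardrail documentation.

    import boto3

    # Hedged sketch: names, messages and filter strengths are placeholders;
    # verify parameter shapes against the create_guardrail documentation.
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    guardrail = bedrock.create_guardrail(
        name="support-assistant-guardrail",
        description="Filters hateful or violent content in prompts and responses",
        contentPolicyConfig={
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        blockedInputMessaging="Sorry, I can't help with that request.",
        blockedOutputsMessaging="Sorry, I can't provide that response.",
    )

    # The returned identifier and version are what downstream invocations
    # reference to apply the guardrail.
    print(guardrail["guardrailId"], guardrail["version"])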

I feel compelled to note here that Custom Model Import only supports three model architectures at the moment — Google’s Flan-T5, Meta’s Llama and Mistral’s models — and that Vertex AI and other Bedrock-rivaling services, including Microsoft’s AI development tools on Azure, offer more or less comparable safety and evaluation features (see Azure AI Content Safety, model evaluation in Vertex and so on).
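For what it’s worth, AWS’ framing of imported models as fully managed APIs suggests they are invoked much like Bedrock’s built-in models, addressed by the imported model’s ARN. The sketch below is an assumption rather than a confirmed recipe: the ARN is fabricated, and the request body follows a Llama-style schema, since Llama is one of the supported architectures.

    import json

    import boto3

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Fabricated ARN; the real value comes back from the Custom Model Import
    # job once it completes.
    imported_model_arn = (
        "arn:aws:bedrock:us-east-1:123456789012:imported-model/example-llama"
    )

    # The request body follows the imported model family's native schema;
    # a Llama-style prompt/max_gen_len payload is assumed here.
    response = runtime.invoke_model(
        modelId=imported_model_arn,
        body=json.dumps({
            "prompt": "Classify the sentiment of this review: great battery life.",
            "max_gen_len": 128,
        }),
    )
    print(json.loads(response["body"].read()))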

What is unique to Bedrock, though, is AWS’ Titan family of generative AI models. And — coinciding with the release of Custom Model Import — there are several noteworthy developments on that front.

Upgraded Titan models

Titan Image Generator, AWS’ text-to-image model, is now generally available after launching in preview last November. As before, Titan Image Generator can create new images given a text description or customize existing images, for example swapping out an image background while retaining the subjects in the image.

Compared to the preview version, Titan Image Generator in GA can generate images with more “creativity,” said Philomin, without going into detail. (Your guess as to what that means is as good as mine.)

I asked Philomin if he had any more details to share about how Titan Image Generator was trained.

At the model’s debut last November, AWS was vague about which data, exactly, it used in training Titan Image Generator. Few vendors readily reveal such information; they see training data as a competitive advantage and thus keep it and info relating to it close to the chest.

Training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Plaintiffs in several cases making their way through the courts reject vendors’ fair use defenses, arguing that text-to-image tools replicate artists’ styles without the artists’ explicit permission and allow users to generate new works resembling artists’ originals for which the artists receive no payment.

Philomin would only tell me that AWS uses a combination of first-party and licensed data.

“We have a combination of proprietary data sources, but also we license a lot of data,” he said. “We actually pay copyright owners licensing fees in order to be able to use their data, and we do have contracts with several of them.”

It’s more detail than from November. But I have a feeling that Philomin’s answer won’t satisfy everyone, particularly the content creators and AI ethicists arguing for greater transparency where it concerns generative AI model training.

In lieu of transparency, AWS says it’ll continue to offer an indemnification policy that covers customers in the event a Titan model like Titan Image Generator regurgitates (i.e. spits out a mirror copy of) a potentially copyrighted training example. (Several rivals, including Microsoft and Google, offer similar policies covering their image generation models.)

To address another pressing ethical threat — deepfakes — AWS says that images created with Titan Image Generator will, as during the preview, come with a “tamper-resistant” invisible watermark. Philomin says that the watermark has been made more resistant in the GA release to compression and other image edits and manipulations.

Segueing into less controversial territory, I asked Philomin whether AWS — like Google, OpenAI and others — is exploring video generation given the excitement around (and investment in) the tech. Philomin didn’t say that AWS wasn’t… but he wouldn’t hint at any more than that.

“Obviously, we’re constantly looking to see what new capabilities customers want to have, and video generation definitely comes up in conversations with customers,” Philomin said. “I’d ask you to stay tuned.”

In one last piece of Titan-related news, AWS released the second generation of its Titan Embeddings model, Titan Text Embeddings V2. Titan Text Embeddings V2 converts text to numerical representations called embeddings to power search and personalization applications. So did the first-generation Embeddings model — but AWS claims that Titan Text Embeddings V2 is overall more efficient, cost-effective and accurate.

“What the Embeddings V2 model does is reduce the overall storage [necessary to use the model] by up to four times while retaining 97% of the accuracy,” Philomin claimed, “outperforming other models that are comparable.”

We’ll see if real-world testing bears that out.
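Philomin didn’t walk through the mechanics, but the storage savings come down to producing shorter vectors per document. As a hedged sketch, assuming the Titan Text Embeddings V2 model ID and its optional dimensions and normalize fields (worth verifying against Bedrock’s documentation), requesting a compact embedding might look like this:

    import json

    import boto3

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Assumed model ID and request fields for Titan Text Embeddings V2; the
    # "dimensions" knob trades vector size (and storage) against accuracy.
    response = runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({
            "inputText": "trail running shoes with good grip",
            "dimensions": 256,
            "normalize": True,
        }),
    )

    embedding = json.loads(response["body"].read())["embedding"]
    print(len(embedding))  # expected length: 256

Cutting the vector length to a fraction of the default is roughly where a 4x storage reduction would come from; whether accuracy really holds at 97% is the part that needs independent verification.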



Why AWS, Google and Oracle are backing the Valkey Redis fork | TechCrunch


The Linux Foundation last week announced that it will host Valkey, a fork of the Redis in-memory data store. Valkey is backed by Amazon Web Services, Google Cloud, Oracle, Ericsson and Snap.

AWS and Google Cloud rarely back an open source fork together. Yet, when Redis Labs switched Redis away from the permissive three-clause BSD license on March 20 and adopted the more restrictive Server Side Public License (SSPL), a fork was always one of the most likely outcomes. At the time of the license change, Redis Labs CEO Rowan Trollope said he “wouldn’t be surprised if Amazon sponsors a fork,” as the new license requires commercial agreements to offer Redis-as-a-service, making it incompatible with the standard definition of “open source.”

It’s worth taking a few steps back to look at how we got to this point. Redis, after all, is among the most popular data stores and at the core of many large commercial and open source deployments.

A brief history of Redis

Throughout its lifetime, Redis has actually seen a few licensing disputes. Redis founder Salvatore Sanfilippo launched the project in 2009 under the BSD license, partly because he wanted to be able to create a commercial fork at some point and also because “the BSD [license] allows for many branches to compete, with different licensing and development ideas,” he said in a recent Hacker News comment.

After Redis quickly gained popularity, Garantia became the first major Redis service provider. Garantia rebranded to RedisDB in 2013, and Sanfilippo and the community pushed back. After some time, Garantia eventually changed its name to Redis Labs and then, in 2021, to Redis.

Sanfilippo joined Redis Labs in 2015 and later transferred his IP to Redis Labs/Redis, before stepping down from the company in 2020. That was only a couple of years after Redis changed how it licenses its Redis Modules, which include visualization tools, a client SDK and more. For those modules, Redis first went with the Apache License with the added Commons Clause that restricts others from selling and hosting these modules. At the time, Redis said that despite this change for the modules, “the license for open-source Redis was never changed. It is BSD and will always remain BSD.” That commitment lasted until a few weeks ago.

Redis’ Trollope reiterated in a statement what he told me when he first announced these changes, emphasizing how the large cloud vendors profited from the open source version and are free to enter a commercial agreement with Redis.

“The major cloud service providers have all benefited commercially from the Redis open-source project so it’s not surprising that they are launching a fork within a foundation,” he wrote. “Our licensing change opened the door for CSPs to establish fair licensing agreements with Redis Inc. Microsoft has already come to an agreement, and we’re happy and open to creating similar relationships with AWS and GCP. We remain focused on our role as stewards of the Redis project, and our mission of investing in the Redis source available product, the ecosystem, the developer experience, and serving our customers. Innovation has been and always will be the differentiating factor between the success of Redis and any alternative solution.”

Cloud vendors backed Valkey

The current reality, however, is that the large cloud vendors, with the notable exception of Microsoft, quickly rallied behind Valkey. This fork originated at AWS, where longtime Redis maintainer Madelyn Olson initially started the project in her own GitHub account. Olson told me that when the news broke, a lot of the current Redis maintainers quickly decided that it was time to move on. “When the news broke, everyone was just like, ‘Well, we’re not going to go contribute to this new license,’ and so as soon as I talked to everyone, ‘Hey, I have this fork — we’re trying to keep the old group together,’” she said, “pretty much everyone was like, ‘yeah, I’m immediately on board.’”

The original Redis private channel included five maintainers: three from Redis, Olson and Alibaba’s  Zhao Zhao, as well as a small group of committers who also immediately signed on to what is now Valkey. The maintainers from Redis unsurprisingly did not sign on, but as David Nalley, AWS’s director for open source strategy and marketing, told me, the Valkey community would welcome them with open arms.

Olson noted that she always knew that this change was a possibility and well within the rights of the BSD license. “I’m more just disappointed than anything else. [Redis] had been a good steward in the past, and I think the community is kind of disappointed in the change.”

Nalley noted that “from an AWS perspective, it probably would not have been the choice that we wanted to see out of Redis Inc.” But he also acknowledged that Redis is well within its rights to make this change. When asked whether AWS had considered buying a license from Redis, he gave a diplomatic answer and noted that AWS “considered a lot of things” and that nothing was off the table in the team’s decision making.

“It’s certainly their prerogative to make such a decision,” he said. “While we have, as a result, made some other decisions about where we’re going to focus our energy and our time, Redis remains an important partner and customer, and we share a large number of customers between us. And so we hope they are successful. But from an open source perspective, we’re now invested in ensuring the success of Valkey.”

It’s not often that a fork comes together this quickly and is able to gather the support of this many companies under the auspices of the Linux Foundation (LF). That’s something that previous Redis forks like KeyDB didn’t have going for them. But as it turns out, some of this was also fortuitous timing. Redis’s announcement came right in the middle of the European version of the Cloud Native Computing Foundation’s KubeCon conference, which was held in Paris this year. There, Nalley met up with the LF’s executive director, Jim Zemlin.

“It ruined KubeCon for me, because suddenly, I ended up in a lot of conversations about how we respond,” he said. “[Zemlin] had some concerns and volunteered the Linux Foundation as a potential home. So we went through the process of introducing Madelyn [Olson] and the rest of the maintainers to the Linux Foundation, just to see if they thought that it was going to be a compatible move.”

What’s next?

The Valkey team is working on getting a compatibility release out that provides current Redis users with a transition path. The community is also working on an improved shared clustering system, improved multi-threaded performance and more.

With all of this, it’s not likely that Redis and Valkey will stay aligned in their capabilities for long, and Valkey may not remain a drop-in Redis replacement in the long run. One area Redis (the company) is investing in is moving beyond in-memory to also using flash storage, with RAM as a large, high-performance cache. That’s why Redis recently acquired Speedb. Olson noted that there are no concrete plans for similar capabilities in Valkey yet, but didn’t rule it out either.

“There is a lot of excitement right now,” Olson said. “I think previously we’ve been a little technologically conservative and trying to make sure we don’t break stuff. Whereas now, I think there’s a lot of interest in building a lot of new things. We still want to make sure we don’t break things but there’s a lot more interest in updating technologies and trying to make everything faster, more performant, more memory dense. […] I think that’s sort of what happens when a changing of the guard happens because a bunch of previous maintainers are now basically no longer there.”



AWS unveils new service for cloud-based rendering projects | TechCrunch


On Tuesday Amazon launched a new service called Deadline Cloud that lets customers set up, deploy and scale up graphics and visual effects rendering pipelines on AWS cloud infrastructure. The new service, which is geared toward the media and entertainment industry, was timed for the National Association of Broadcasters conference in Las Vegas that kicks off later this month.

Using Deadline Cloud, customers in media and entertainment as well as architecture and engineering can use AWS compute to render content for TV shows, movies, ads, video games and digital blueprints, said AWS GM of creative tools Antony Passemard.

In other words, AWS is betting on increasing demand for tools that help media, entertainment and other execs navigate the ins and outs of cloud-based rendering.

“We’re at a tipping point in the industry where demand for rendering quality VFX and the amount of content created using generative AI are outpacing customers’ [compute] capacity,” Passemard added in a blog post. “AWS Deadline Cloud meets any customer’s rendering requirements by providing a scalable render farm without having to manage the underlying infrastructure.”

A setup wizard in Deadline Cloud walks customers through the process of setting up a render farm, including specifying the size and duration of their projects to determine instance types, and configuring permissions. Deadline Cloud then provisions Amazon Elastic Compute Cloud instances and manages the network and compute infrastructure. And — for customers with on-premises compute — Deadline Cloud integrates with this compute and uses it to execute rendering jobs.
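The same setup steps are exposed through AWS’ SDKs. As a rough sketch only, assuming boto3 ships a "deadline" client with create_farm and list_farms operations and using placeholder names, standing up the shell of a render farm programmatically might look like this:

    import boto3

    # Assumed client name and operations; check the boto3 docs for Deadline
    # Cloud before relying on these. The display name is a placeholder.
    deadline = boto3.client("deadline", region_name="us-east-1")

    # A farm is the top-level container; queues, fleets and jobs hang off it,
    # mirroring what the setup wizard provisions through the console.
    farm = deadline.create_farm(displayName="demo-render-farm")
    print("Created farm:", farm["farmId"])

    # List the farms now visible in this account and region.
    for f in deadline.list_farms()["farms"]:
        print(f["farmId"], f["displayName"])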

Deadline Cloud’s dashboard provides a view to analyze logs, preview in-progress render jobs and review and control costs. With Deadline Cloud, customers can link their own third-party software licenses with the service or leverage usage-based licensing for rendering with existing rendering tools (e.g., Autodesk Maya, Foundry Nuke, and SideFX Houdini) and engines.

“[With Deadline Cloud,] creative teams can embrace the velocity of content pipelines and respond quickly to opportunities to accept more projects, while meeting tight deadlines and delivering high-quality content,” Passemard continued.

Deadline Cloud is now available in the U.S. East (Ohio, N. Virginia), U.S. West (Oregon), Asia Pacific (Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland) AWS regions.

Cloud-based rendering is nothing new. Back in 2015, Google made a splash in the space with the acquisition of Zync, whose technology has since been used to launch Google Cloud–powered visual effects tooling in partnership with Sony’s animation studio, Sony Pictures Imageworks. Elsewhere, platforms like Arch and Chaos Cloud have provided on-demand cloud-based VFX infrastructure for years.

But the COVID-19 pandemic accelerated VFX workloads’ move to the cloud as the cost of maintaining hardware — and the space to store it — increased while work simultaneously dwindled, the result of work-from-home mandates and health-related shutdowns of productions. As Passemard alluded to, the rise of generative AI has fueled the demand for rendering hardware, too, and has led to the creation of entirely new cloud-based, GPU-accelerated providers.

