
UnitedHealth CEO tells Senate all systems now have multi-factor authentication after hack | TechCrunch


UnitedHealth Group Chief Executive Officer Andrew Witty told senators on Wednesday that the company has now enabled multi-factor authentication on all of its systems exposed to the internet, in response to the recent cyberattack against its subsidiary Change Healthcare.

The lack of multi-factor authentication was at the center of the ransomware attack that hit Change Healthcare earlier this year, which impacted pharmacies, hospitals and doctors’ offices across the United States. Multi-factor authentication, or MFA, is a basic cybersecurity mechanism that prevents hackers from breaking into accounts or systems with a stolen password by requiring a second code to log in.
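To make the mechanism concrete, here is a minimal sketch of how a TOTP-style second factor (the approach standardized in RFC 6238 and used by most authenticator apps) can be checked at login. This is an illustrative example only; the function names, secret and parameters are assumptions for the sketch and say nothing about how Change Healthcare's systems were actually configured.

```python
# Minimal sketch of TOTP-based MFA verification (RFC 6238) using only the
# Python standard library. All names and values here are illustrative.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step           # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify_login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    # Even with a stolen password, the login fails unless the attacker can
    # also produce the current code from the user's enrolled device.
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))
```

In practice a server would also accept the adjacent time windows to tolerate clock drift, but the core point stands: a stolen password alone is not enough to get in.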

In a written statement submitted on Tuesday ahead of two congressional hearings, Witty revealed that hackers used a set of stolen credentials to access a Change Healthcare server, which he said was not protected by multi-factor authentication. After breaking into that server, the hackers were then able to move into other company systems to exfiltrate data, and later encrypt it with ransomware, Witty said in the statement.

Today, during the first of those two hearings, Witty faced questions about the cyberattack from senators on the Finance Committee. In response to questions by Sen. Ron Wyden, Witty said that “as of today, across the whole of UHG, all of our external-facing systems have got multi-factor authentication enabled.”

“We have an enforced policy across the organization to have multi-factor authentication on all of our external systems, which is in place,” Witty said.

When asked to confirm Witty’s statement, UnitedHealth Group spokesperson Anthony Marusic told TechCrunch that Witty “was very clear with his statement.”

Witty blamed the lapse on the fact that Change Healthcare’s systems had not yet been upgraded after UnitedHealth Group acquired the company in 2022.

“We were in the process of upgrading the technology that we had acquired. But within there, there was a server, which I’m incredibly frustrated to tell you, was not protected by MFA,” Witty said. “That was the server through which the cybercriminals were able to get into Change. And then they led off a ransomware attack, if you will, which encrypted and froze large parts of the system.”

Contact Us

Do you have more information about the Change Healthcare ransomware attack? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

Witty also said that the company is still working on understanding exactly why that server did not have multi-factor authentication enabled.

Wyden criticized the company’s failure to upgrade the server. “We heard from your people that you had a policy, but you all weren’t carrying it out. And that’s why we have the problem,” Wyden said.

UnitedHealth has yet to notify people who were impacted by the cyberattack, Witty said during the hearing, arguing that the company still needs to determine the full extent of the hack and the stolen information. So far, the company has only said that hackers stole the personal and health information of “a substantial proportion of people in America.”

Last month, UnitedHealth said that it paid $22 million to the hackers who broke into the company’s systems. Witty confirmed that payment during the Senate hearing.

On Wednesday afternoon, Witty also appeared before the House Energy and Commerce Committee, where he revealed that “maybe a third” of Americans had their personal health information stolen by the hackers.



Women in AI: Kristine Gloria tells women to enter the field and 'follow your curiosity' | TechCrunch


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kristine Gloria formerly led the Aspen Institute’s Emergent and Intelligent Technologies Initiative — the Aspen Institute being the D.C.-headquartered think tank focused on values-based leadership and policy expertise. Gloria holds a PhD in cognitive science and a Master’s in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab and the Center for Society, Technology and Policy at UC Berkeley.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

To be frank, I definitely didn’t start my career in pursuit of being in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my Master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in D.C. as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room filled with public policymakers and politicians who were throwing around terms that didn’t quite fit their actual technical definitions. It was shortly after this meeting that I realized that in order to move the needle on public policy, I needed the credentials. I went back to school, earning my doctorate in cognitive science with a concentration on semantic technologies and online consumer privacy. I was very fortunate to have found a mentor and advisor and lab that encouraged a cross-disciplinary understanding of how technology is designed and built. So, I sharpened my technical skills alongside developing a more critical viewpoint on the many ways tech intersects our lives. In my role as the director of AI at the Aspen Institute, I then had the privilege to ideate, engage and collaborate with some of the leading thinkers in AI. And I always found myself gravitating towards those who took the time to deeply question if and how AI would impact our day-to-day lives.

Over the years, I’ve led various AI initiatives and one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at a new nonprofit, Young Futures, I’m excited to weave in this type of thinking to achieve our mission of making the digital world an easier place to grow up. Specifically, as generative AI becomes table stakes and as new technologies come online, it’s both urgent and critical that we help preteens, teens and their support units navigate this vast digital wilderness together.

What work are you most proud of (in the AI field)?

I’m most proud of two initiatives. First is my work related to surfacing the tensions, pitfalls and effects of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we posit one of my all-time favorite questions: “How can we (data and algorithmic operators) recast our own models to forecast for a different future, one that centers around the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s a constant consideration throughout my work. The second most important initiative recently came from my time as head of Data at Blue Fever, a company on a mission to improve youth well-being in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in this process. Most saliently, I gained a profound new appreciation for the impact a virtual companion can have on someone who’s struggling or who may not have the support systems in place. Blue was designed and built to bring its “big-sibling energy” to help guide users to reflect on their mental and emotional needs.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Unfortunately, the challenges are real and still very current. I’ve experienced my fair share of disbelief in my skills and experience among all types of colleagues in the space. But, for every single one of those negative challenges, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help manage. I also think that so much has changed in this space even in the last five years. The necessary skill sets and professional experiences that qualify as part of “AI” are not strictly computer science-focused anymore.

What advice would you give to women seeking to enter the AI field?

Enter in and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to continuously be critically optimistic about the field itself.

What are some of the most pressing issues facing AI as it evolves?

I actually think some of the most pressing issues facing AI are the same issues we’ve not quite gotten right since the web was first introduced. These are issues around agency, autonomy, privacy, fairness, equity and so on. These are core to how we situate ourselves amongst the machines. Yes, AI can make it vastly more complicated — but so can socio-political shifts.

What are some issues AI users should be aware of?

AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the discourse grows around how technology, and particularly AI, may impact our well-being, it’s important to remember that there are tried-and-true tools to manage more negative outcomes.

What is the best way to responsibly build AI?

A responsible build of AI is more than just the code. A truly responsible build takes into account the design, governance, policies and business model. Each drives the others, and we will continue to fall short if we only strive to address one part of the build.

How can investors better push for responsible AI?

One specific task, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, this practice of creating model cards enables teams — like funders — to evaluate the risks and safety issues of AI models used in a system. Also, related to the above, investors should holistically evaluate the system in its capacity and ability to be built responsibly. For example, if you have trust and safety features in the build or a model card published, but your revenue model exploits vulnerable population data, then there’s misalignment to your intent as an investor. I do think you can build responsibly and still be profitable. Lastly, I would love to see more collaborative funding opportunities among investors. In the realm of well-being and mental health, the solutions will be varied and vast as no person is the same and no one solution can solve for all. Collective action among investors who are interested in solving the problem would be a welcome addition.
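For readers unfamiliar with the practice, a model card is essentially a structured disclosure document that ships alongside a model. Below is a minimal sketch of what one might capture, loosely following the section headings proposed in “Model Cards for Model Reporting” (Mitchell et al., 2019, co-authored by Gebru); every field value here is a hypothetical illustration, not drawn from Mozilla Ventures’ actual diligence process or any real product.

```python
# Minimal sketch of a machine-readable model card, loosely following the
# section headings in "Model Cards for Model Reporting" (Mitchell et al., 2019).
# All field values below are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class ModelCard:
    model_details: str           # who built it, version, architecture, license
    intended_use: str            # primary use cases and explicit out-of-scope uses
    factors: str                 # demographic or environmental groups evaluated
    metrics: dict                # performance, disaggregated per group where possible
    training_data: str           # provenance and known gaps in the training set
    ethical_considerations: str  # risks, sensitive data, potential harms
    caveats: str = "Not evaluated for clinical or safety-critical use."


# A hypothetical card an investor might request during diligence.
card = ModelCard(
    model_details="Emotional-support companion, v0.1 (hypothetical)",
    intended_use="Reflective journaling prompts for teens; not a substitute for therapy",
    factors="Age bands and languages supported",
    metrics={"flagged-content recall": "reported per age band"},
    training_data="Licensed well-being corpora; no scraped social media",
    ethical_considerations="Escalation path to human moderators required",
)
```

The value for a funder is less the artifact itself than what producing it forces a team to state in writing: who the model is for, where it was not evaluated, and which risks remain open.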

