From Digital Age to Nano Age. WorldWide.


Robotic Automations

Ransomware gang's new extortion trick? Calling the front desk | TechCrunch


When a hacker called the company that his gang claimed to breach, he felt the same way that most of us feel when calling the front desk: frustrated.

The phone call between the hacker, who claims to represent the ransomware gang DragonForce, and the victim company employee was posted by the ransomware gang on its dark web site in an apparent attempt to put pressure on the company to pay a ransom demand. In reality, the call recording just shows a somewhat hilarious and failed attempt to extort and intimidate a company’s rank-and-file employees.

The recording also shows how ransomware gangs are always looking for different ways to intimidate the companies they hack.

“It’s increasingly common for threat actors to make contact via telephone, and this should be factored into organizations’ response plans. Do we engage or not? Who should engage? You don’t want to be making these decisions while the threat actor is listening to your hold music,” said Brett Callow, a threat analyst at Emsisoft.

In the call, the hacker asks to speak with the “management team.” Instead, two different employees put him on hold until Beth, from HR, answers the call.

“Hi, Beth, how are you doing?” the hacker said.

After a minute in which the two have trouble hearing each other, Beth tells the hacker that she is not familiar with the data breach that the hacker claimed. When the hacker attempts to explain what’s going on, Beth interrupts him and asks: “Now, why would you attack us?”

“Is there a reason why you chose us?” Beth insists.

“No need to interrupt me, OK? I’m just trying to help you,” the hacker responds, growing increasingly frustrated.

The hacker then proceeds to explain to Beth that the company she works for has only eight hours to negotiate before the ransomware gang releases the company’s stolen data.

“It will be published for public access, and it will be used for fraudulent activities and for terrorism by criminals,” the hacker says.

“Oh, OK,” says Beth, apparently unfazed, and not understanding where the data would end up.

“So it will be on X?” Beth asks. “So is that Dragonforce.com?”

The hacker then threatens Beth, saying they will start calling the company’s clients, employees and partners. The hacker adds that they have already contacted the media and provided a recording of a previous call with one of her colleagues, which is also on the gang’s dark web site.

“So that includes a conversation with Patricia? Because you know, that’s illegal in Ohio,” Beth says.

“Excuse me?” the hacker responds.

“You can’t do that in Ohio. Did you record Patricia?” Beth continues.

“Ma’am, I am a hacker. I don’t care about the law,” responds the hacker, growing even more frustrated.

Then the hacker tries one more time to convince Beth to negotiate, to no avail.

“I would never negotiate with a terrorist or a hacker as you call yourself,” Beth responds, asking the hacker to confirm a good phone number to call them back.

When the hacker says they “got no phone number,” Beth has had enough.

“Alright, well then I’m just gonna go ahead and end this phone call now,” she says. “I think we spent enough time and energy on this.”

“Well, good luck,” Beth says.

“Thank you, take care,” the hacker says.

The company that was allegedly hacked in this incident, which TechCrunch is not naming so as not to help the hackers extort it, did not respond to a request for comment.


Software Development in Sri Lanka

Watch: How Anthropic found a trick to get AI to give you answers it's not supposed to


If you build it, people will try to break it. Sometimes even the people building the thing are the ones breaking it. Such is the case with Anthropic and its latest research, which demonstrates an interesting vulnerability in current LLM technology: more or less, if you keep at a question, you can break guardrails and wind up with large language models telling you things they are designed not to. Like how to build a bomb.

Of course, given progress in open-source AI technology, you can spin up your own LLM locally and just ask it whatever you want, but for more consumer-grade stuff this is an issue worth pondering. What’s fun about AI today is the quick pace at which it is advancing, and how well — or not — we’re doing as a species to better understand what we’re building.

If you’ll allow me the thought, I wonder if we’re going to see more questions and issues of the type Anthropic outlines as LLMs and other new AI model types get smarter and larger. Which is perhaps repeating myself. But the closer we get to more generalized AI intelligence, the more it should resemble a thinking entity, and not a computer that we can program, right? If so, we might have a harder time nailing down edge cases, to the point where that work becomes infeasible. Anyway, let’s talk about what Anthropic recently shared.


