From Digital Age to Nano Age. WorldWide.

Tag: police

Robotic Automations

Encrypted services Apple, Proton and Wire helped Spanish police identify activist | TechCrunch


As part of an investigation into people involved in the pro-independence movement in Catalonia, the Spanish police obtained information from the encrypted services Wire and Proton, which helped the authorities identify a pseudonymous activist, according to court documents obtained by TechCrunch. Earlier this year, Spain’s Guardia Civil police force sent legal requests through Swiss police […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


US, UK police identify and charge Russian leader of LockBit ransomware gang | TechCrunch


The identity of the leader of one of the most infamous ransomware groups in history has finally been revealed.

On Tuesday, a coalition of law enforcement led by the U.K.’s National Crime Agency announced that Russian national Dmitry Yuryevich Khoroshev, 31, is the person behind the nickname LockBitSupp, the administrator and developer of the LockBit ransomware. The U.S. Department of Justice also announced the indictment of Khoroshev, accusing him of computer crimes, fraud and extortion.

“Today we are going a step further, charging the individual who we allege developed and administered this malicious cyber scheme, which has targeted over 2,000 victims and stolen more than $100 million in ransomware payments,” Attorney General Merrick B. Garland was quoted as saying in the announcement.

According to the DOJ, Khoroshev is from Voronezh, a city in Russia around 300 miles south of Moscow.

“Dmitry Khoroshev conceived, developed, and administered Lockbit, the most prolific ransomware variant and group in the world, enabling himself and his affiliates to wreak havoc and cause billions of dollars in damage to thousands of victims around the globe,” said U.S. Attorney Philip R. Sellinger for the District of New Jersey, where Khoroshev was indicted.

The law enforcement coalition announced the identity of LockBitSupp in press releases, as well as on LockBit’s original dark web site, which the authorities seized earlier this year. On the site, the U.S. Department of State announced a reward of $10 million for information that could help the authorities to arrest and convict Khoroshev.

The U.S. government also announced sanctions against Khoroshev, which effectively bars anyone from transacting with him, such as victims paying a ransom. Sanctioning the people behind ransomware makes it more difficult for them to profit from cyberattacks. Violating sanctions, including paying a sanctioned hacker, can result in heavy fines and prosecution.

LockBit has been active since 2020, and, according to the U.S. cybersecurity agency CISA, the group’s ransomware variant was “the most deployed” in 2022.

On Sunday, the law enforcement coalition restored LockBit’s seized dark web site to publish a list of posts intended to tease the latest revelations. In February, authorities announced that they had taken control of LockBit’s site and replaced the hackers’ posts with their own, including a press release and other information related to what the coalition called “Operation Cronos.”

Shortly after, LockBit appeared to make a return with a new site and a new list of alleged victims, which was being updated as of Monday, according to a security researcher who tracks the group.

For weeks, LockBit’s leader, known as LockBitSupp, had been vocal and public in an attempt to dismiss the law enforcement operation, and to show that LockBit is still active and targeting victims. In March, LockBitSupp gave an interview to news outlet The Record in which they claimed that Operation Cronos and law enforcement’s actions don’t “affect business in any way.”

“I take this as additional advertising and an opportunity to show everyone the strength of my character. I cannot be intimidated. What doesn’t kill you makes you stronger,” LockBitSupp told The Record.





Police resurrect LockBit's site and troll the ransomware gang | TechCrunch


An international coalition of police agencies has resurrected the dark web site of the notorious LockBit ransomware gang, which they had seized earlier this year, teasing new revelations about the group.

On Sunday, what was once LockBit’s official darknet site reappeared online with new posts suggesting the authorities plan to release new information about the hackers within the next 24 hours, as of this writing.

The posts have titles such as “Who is LockBitSupp?”, “What have we learnt”, “More LB hackers exposed”, and “What have we been doing?”

In February, a law enforcement coalition that included the U.K.’s National Crime Agency, the U.S. Federal Bureau of Investigation, as well as forces from Germany, Finland, France, Japan and others announced that they had infiltrated LockBit’s official site. The coalition seized the site and replaced information on it with their own press release and other information in a clear attempt to troll and warn the hackers that the authorities were on to them.

The February operation also included the arrests of two alleged LockBit members in Ukraine and Poland, the takedown of 34 servers across Europe, the U.K., and the U.S., as well as the seizure of more than 200 cryptocurrency wallets belonging to the hackers.

The NCA and the FBI did not immediately respond to a request for comment.

LockBit first emerged in 2019, and has since become one of the most prolific ransomware gangs in the world, netting millions of dollars in ransom payments. The group has proven to be very resilient. Even after February’s takedown, the group has re-emerged with a new dark web leak site, which has been actively updated with new alleged victims.

All the new posts on the seized website, except for one, have a countdown that ends at 9 a.m. Eastern Time on Tuesday, May 7, suggesting that’s when law enforcement will announce the new actions against LockBit. Another post says the site will be shut down in four days.

Since the authorities announced what they called “Operation Cronos” against LockBit in February, the group’s leader, known as LockBitSupp, has claimed in an interview that law enforcement has exaggerated its access to the criminal organization, as well as the effect of its takedown.

On Sunday, the hacking collective vx-underground wrote on X that they had spoken to LockBit’s administrative staff, who had told them the police were lying.

“I don’t understand why they’re putting on this little show. They’re clearly upset we continue to work,” the staff said, according to vx-underground.

The identity of LockBitSupp is still unknown, although that could change soon. One of the new posts on the seized LockBit site promises to reveal the hacker’s identity on Tuesday. It has to be noted, however, that the previous version of the seized site also appeared to promise to reveal the gang leader’s identity, but eventually did not.





Microsoft bans U.S. police departments from using enterprise AI tool for facial recognition | TechCrunch


Microsoft has changed its policy to ban U.S. police departments from using generative AI for facial recognition through the Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI technologies.

Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used “by or for” police departments for facial recognition in the U.S., including integrations with OpenAI’s text- and speech-analyzing models.

A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.

The changes in terms come a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations (even the best generative AI models today invent facts) and racial biases introduced from the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).

It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service, and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft and OpenAI and will update this post if we hear back.

The new terms leave wiggle room for Microsoft.

The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police).

That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, reporting by Bloomberg revealed that OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities — a departure from the startup’s earlier ban on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations, per The Intercept.

Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding additional compliance and management features geared toward government agencies including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.

Update: After publication, Microsoft said its original change to the terms of service contained an error, and in fact the ban applies only to facial recognition in the U.S. It is not a blanket ban on police departments using the service. 

 




European police chiefs target E2EE in latest demand for 'lawful access' | TechCrunch


In the latest iteration of the never-ending (and always head-scratching) crypto wars, Graeme Biggar, the director general of the UK’s National Crime Agency (NCA), has called on Instagram owner Meta to rethink its continued rollout of end-to-end encryption (E2EE) — with web users’ privacy and security pulled into the frame yet again.

The call follows a joint declaration by European police chiefs, including the UK’s own, published Sunday — expressing “concern” at how E2EE is being rolled out by the tech industry and calling for platforms to design security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

In remarks to the BBC today, the NCA chief suggested Meta’s current plan to beef up the security around Instagram users’ private chats by rolling out so-called “zero access” encryption, where only the message sender and recipient can access the content, poses a threat to child safety. The social networking giant also kicked off a long-planned rollout of default E2EE on Facebook Messenger back in December.
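Conceptually, “zero access” means the platform only ever relays ciphertext it has no key to read. The toy Python sketch below illustrates just that data flow — the XOR keystream here is a stand-in, not real cryptography (production E2EE uses audited protocols such as the Signal protocol), and all names and messages are invented for illustration:

```python
import os

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    """XOR each message byte with a keystream byte (toy cipher, not for real use)."""
    return bytes(d ^ k for d, k in zip(data, keystream))

# The two endpoints share a secret key; the relaying server never sees it.
key = os.urandom(32)
plaintext = b"meet at the usual place"

# Stretch the key to the message length for this toy example.
keystream = (key * (len(plaintext) // len(key) + 1))[:len(plaintext)]

ciphertext = xor_bytes(plaintext, keystream)   # all the server ever relays
decrypted = xor_bytes(ciphertext, keystream)   # only an endpoint can do this
assert decrypted == plaintext
```

The server in this model handles only `ciphertext`; without `key`, which never leaves the two endpoints, it cannot recover the message — which is precisely the property law enforcement is objecting to.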

“Pass us the information”

Speaking to BBC Radio 4’s Today program on Monday morning, Biggar told interviewer Nick Robinson: “Our responsibility as law enforcement… is to protect the public from organised crime, from serious crime, and we need information to be able to do that.

“What is happening is the tech companies are putting a lot of the information on to end-to-end encrypted. We have no problem with encryption, I’ve got a responsibility to try and protect the public from cybercrime, too — so strong encryption is a good thing — but what we need is for the companies to still be able to pass us the information we need to keep the public safe.”

Currently, as a result of being able to scan message content where E2EE has not been rolled out, Biggar said platforms are sending tens of millions of child safety-related reports a year to police forces around the world — adding a further claim that “on the back of that information we typically safeguard 1,200 children a month and arrest 800 people”. The implication is that those reports will dry up if Meta proceeds with expanding its use of E2EE to Instagram.

Pointing out that Meta-owned WhatsApp has had gold-standard encryption as its default for years (E2EE was fully implemented across the messaging platform by April 2016), Robinson wondered if this wasn’t a case of the crime agency trying to close the stable door after the horse had bolted.

To which he got no straight answer — just more head-scratching equivocation.

Biggar: “It is a trend. We are not trying to stop encryption. As I said, we completely support encryption and privacy and even end-to-end encryption can be absolutely fine. What we want is the industry to find ways to still provide us with the information that we need.”

His intervention follows a joint declaration of around 30 European police chiefs, published Sunday, in which the law enforcement heads urge platforms to adopt unspecified “technical solutions” that they suggest can enable them to offer users robust security and privacy at the same time as maintaining the ability to spot illegal activity and report decrypted content to police forces.

“Companies will not be able to respond effectively to a lawful authority,” the police chiefs suggest, raising concerns that E2EE is being deployed in ways that undermine platforms’ abilities to identify illegal activity themselves and also their ability to send content reports to police.

“As a result, we will simply not be able to keep the public safe,” they claim, adding: “We therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful authority.”

A similar “lawful access” mandate was adopted on encryption by the European Council back in a December 2020 resolution.

Client-side scanning?

The European police chiefs’ declaration does not explain which technologies they want platforms to deploy in order to enable CSAM-scanning and law enforcement to be sent decrypted content. But, most likely, it’s some form of client-side scanning technology they’re lobbying for — such as the system Apple had been poised to roll out in 2021, for detecting child sexual abuse material (CSAM) on users’ own devices, before a privacy backlash forced it to shelve and later quietly drop the plan. (Apple did, however, ship other child-safety features, such as nudity warnings in Messages.)

European Union lawmakers, meanwhile, still have a controversial message-scanning CSAM legislative plan on the table. Privacy and legal experts — including the bloc’s own data protection supervisor — have warned the draft law poses an existential threat to democratic freedoms, as well as wreaking havoc with cybersecurity. Critics of the plan also argue it’s a flawed approach to child safeguarding, suggesting it’s likely to cause more harm than good by generating lots of false positives.

Last October parliamentarians pushed back against the Commission proposal, backing a substantially revised approach that aims to limit the scope of so-called CSAM “detection orders”. However, the European Council has yet to agree its position. So where the controversial legislation will end up remains to be seen. This month scores of civil society groups and privacy experts warned the proposed “mass surveillance” law remains a threat to E2EE. (In the meantime, EU lawmakers have agreed to extend a temporary derogation from the bloc’s ePrivacy rules that allows platforms to carry out voluntary CSAM-scanning — which the planned law is intended to replace.)

The timing of the joint declaration by European police chiefs suggests it’s intended to amp up pressure on EU lawmakers to stick with the CSAM-scanning plan despite trenchant opposition from the parliament. (Hence they also write: “We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe.”)

The EU proposal does not prescribe particular technologies that platforms must use to scan message content to detect CSAM either, but critics warn it’s likely to force adoption of client-side scanning — despite the nascent technology being, as they see it, immature, unproven and simply not ready for mainstream use, which is another reason they’re sounding the alarm so loudly.

Robinson didn’t ask Biggar if police chiefs are lobbying for client-side scanning specifically but he did ask whether they want Meta to “backdoor” encryption. Again, the answer was fuzzy.

“We wouldn’t call it a backdoor — and exactly how it happens is for industry to determine. They are the experts in this,” he demurred, without specifying exactly what they do want, as if finding a way to circumvent strong encryption is a simple case of techies needing to nerd harder.

A confused Robinson pressed the UK police chief for clarification, pointing out information is either robustly encrypted (and so private) or it’s not. But Biggar danced even further away from the point — arguing “every platform is on a spectrum”, i.e. of information security vs information visibility. “Almost nothing is at the absolutely completely secure end,” he suggested. “Customers don’t want that for usability reasons [such as] their ability to get their data back if they’ve lost a phone.

“What we’re saying is being absolute on either side doesn’t work. Of course we don’t want everything to be absolutely open. But also we don’t want everything to be absolutely closed. So we want the company to find a way of making sure that they can provide security and encryption for the public but still provide us with the information that we need to protect the public.”

Non-existent safety tech

In recent years the UK Home Office has been pushing the notion of so-called “safety tech” that would allow for scanning of E2EE content to detect CSAM without impacting user privacy. However a 2021 “Safety Tech” challenge it ran, in a bid to deliver proof of concepts for such a technology, produced results so poor that the cyber security professor appointed to independently evaluate the projects, the University of Bristol’s Awais Rashid, warned last year that none of the technology developed for the challenge is fit for purpose, writing: “Our evaluation shows that the solutions under consideration will compromise privacy at large and have no built-in safeguards to stop repurposing of such technologies for monitoring any personal communications.”

If technology does exist to allow law enforcement to access E2EE data in the clear without harming users’ privacy, as Biggar appears to be claiming, one very basic question is why police forces can’t explain exactly what they want platforms to implement. (Reminder: Last year reports suggested government ministers had privately acknowledged no such privacy-safe E2EE-scanning technology currently exists.)

TechCrunch contacted Meta for a response to Biggar’s remarks and to the broader joint declaration. In an emailed statement a company spokesperson repeated its defence of expanding access to E2EE, writing: “The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters, and criminals. We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security.

“We recently published an updated report setting out these measures, such as restricting people over 19 from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour. As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe.” 

The company has weathered a string of similar calls from successive UK Home Secretaries over the Conservative governments’ decade-plus run. Just last September, then-Home Secretary Suella Braverman warned Meta it must deploy unspecified “safety measures” alongside E2EE — warning the government could use powers in the Online Safety Bill (now Act) to sanction the company if it failed to play ball.

Asked by Robinson if the government could (and should) act if Meta does not change course on E2EE, Biggar both invoked the Online Safety Act and pointed to another (older) piece of legislation, the surveillance-enabling Investigatory Powers Act (IPA), saying: “Government can act and government should act and it has strong powers under the Investigatory Powers Act and also the Online Safety Act to do so.”

Penalties for breaches of the Online Safety Act can be substantial — with Ofcom empowered to issue fines of up to 10% of worldwide annual turnover.

In another concerning step for people’s security and privacy, the government is in the process of beefing up the IPA with more powers targeted at messaging platforms, including a requirement that messaging services clear security features with the Home Office before releasing them.

The controversial plan to further expand IPA’s scope has triggered concern across the UK tech industry — which has suggested citizens’ security and privacy will be put at risk by the additional measures. Last summer Apple also warned it could be forced to shut down mainstream services like iMessage and FaceTime in the UK if the government did not rethink the expansion of surveillance powers.

There’s some irony in the latest law enforcement-led lobbying campaign aimed at derailing the onward march of E2EE across mainstream digital services hinging on a plea by police chiefs against binary arguments in favor of privacy — given there has almost certainly never been more signals intelligence available for law enforcement and security services to scoop up to feed their investigations, even factoring in the rise of E2EE. So the idea that improved web security will suddenly spell the end of child safeguarding efforts is itself a distinctly binary claim.

However, anyone familiar with the decades-long crypto wars won’t be surprised to see double-standard pleas deployed in a bid to weaken online security — that’s how this propaganda war has always been waged.



How Ukraine’s cyber police fights back against Russia's hackers | TechCrunch


On February 24, 2022, Russian forces invaded Ukraine. Since then, life in the country has changed for everyone.

For the Ukrainian forces who had to defend their country, for the regular citizens who had to withstand invading forces and constant shelling, and for the Cyberpolice of Ukraine, which had to shift its focus and priorities.

“Our responsibility changed after the full scale war started,” said Yevhenii Panchenko, the chief of division of the Cyberpolice Department of the National Police of Ukraine, during a talk on Tuesday in New York City. “New directives were put under our responsibility.”

During the talk at the Chainalysis LINKS conference, Panchenko said that the Cyberpolice comprises around a thousand employees, about forty of whom track crypto-related crimes. The Cyberpolice’s responsibility is to combat “all manifestations of cyber crime in cyberspace,” said Panchenko. And after the war started, he said, “we were also responsible for the active struggle against the aggression in cyberspace.”

Panchenko sat down for a wide-ranging interview with TechCrunch on Wednesday, where he spoke about the Cyberpolice’s new responsibilities in wartime Ukraine. That includes tracking what war crimes Russian soldiers are committing in the country, which they sometimes post on social media; monitoring the flow of cryptocurrency funding the war; exposing disinformation campaigns; investigating ransomware attacks; and training citizens on good cybersecurity practices.

The following transcript has been edited for brevity and clarity.

TechCrunch: How did your job and that of the police change after the invasion?

It almost totally changed. Because we still have some regular tasks that we always do, we’re responsible for all the spheres of cyber investigation.

We needed to relocate some of our units in different places, of course, to some difficult organizations because now we need to work separately. And also we added some new tasks and new areas for us of responsibilities when the war started.

From the list of the new tasks that we have, we crave information about Russian soldiers. We never did that. We don’t have any experience before February 2022. And now we try to collect all the evidence that we have because they also adapted and started to hide, like their social media pages that we used for recognizing people who were taking part in the larger invading forces that Russians used to get our cities and kill our people.

Also, we are responsible for identifying and investigating the cases where Russian hackers do attacks against Ukraine. They attack our infrastructure, sometimes DDoS [distributed denial-of-service attacks], sometimes they make defacements, and also try to disrupt our information in general. So, it’s quite a different sphere.

Because we don’t have any cooperation with Russian law enforcement, that’s why it’s not easy to sometimes identify or search information about IP addresses or other things. We need to find new ways to cooperate on how to exchange data with our intelligence services.

Some units are also responsible for defending the critical infrastructure in the cyber sphere. It’s also an important task. And today, many attacks also target critical infrastructure. Not only missiles, but hackers also try to get the data and destroy some resources like electricity, and other things.

When we think about soldiers, we think about real world actions. But are there any crimes that Russian soldiers are committing online?

[Russia] uses social media to sometimes take pictures and publish them on the internet, as it was usual in the first stage of the war. When the war first started, probably for three or four months [Russian soldiers] published everything: videos and photos from the cities that were occupied temporarily. That was evidence that we collected.

And sometimes they also make videos when they shoot in a city, or use tanks or other vehicles with really big guns. There’s some evidence that they don’t choose the target, they just randomly shoot around. It’s the video that we also collected and included in investigations that our office is doing against the Russians.

In other words, looking for evidence of war crimes?

Yes.

How has the ransomware landscape in Ukraine changed after the invasion?

It’s changed because Russia is now not only focused on the money side; their main target is to show citizens and probably some public sector that [Russia] is really effective and strong. If they have any access on a first level, they don’t deep dive, they just destroy the resources and try to deface just to show that they are really strong. They have really effective hackers and groups who are responsible for that. Now, we don’t have so many cases related to ransom, we have many cases related to disruption attacks. It has changed in that way.

Has it been more difficult to distinguish between pro-Russian criminals and Russian government hackers?

Really difficult, because they don’t like to look like a government structure or some units in the military. They always find a really fancy name like, I don’t know, ‘Fancy Bear’ again. They try to hide their real nature.

Contact Us

Do you have information about cyberattacks in Ukraine? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.

But we see that after the war started, their militaries and intelligence services started to organize groups — maybe they’re not so effective and not so professional as some groups that worked before the war started. But they organize the groups in a massive [scale]. They start from growing new partners, they give them some small tasks, then see if they are effective and truly succeed in a small portion of IT knowledge. Then they move forward and do some new tasks. Now we can see many of the applications they also publish on the internet about the results. Some are not related to what governments or intelligence groups did, but they publish that intelligence. They also use their own media resources to raise the impact of the attack.

What are pro-Russian hacking groups doing these days? What activities are they focused on? You mentioned critical infrastructure defacements; is there anything else that you’re tracking?

It starts with basic attacks, like DDoS, to destroy communications and the channels we use to communicate. Then, of course, defacements. They also collect data. Sometimes they publish it in open sources. And sometimes they probably collect it but don’t use it for disruption, or they use it to show that they already have the access.

Sometimes we learn about the situation when we prevent a crime, but also through attacks: we get signs of compromise that were probably used against one government body, and then we share them with others.

[Russia] also creates many psyops channels. Sometimes an attack did not succeed, and even if they don’t have any evidence, they’ll say, “we have access to the system of military structures of Ukraine.”

How are you going after these hackers? Some are not inside the country, and some are inside the country.

That’s the worst thing we face now, but it’s a situation that could change. We just need to collect all the evidence and conduct investigations as best we can. We also inform law enforcement agencies in the countries that cooperate with us about the actors we identify as part of the groups that committed attacks on Ukrainian territory or on our critical infrastructure.

Why is it important? Because a regular soldier from the Russian army will probably never come to the European Union or other countries. But if we talk about the smart guys who already have a lot of knowledge of offensive hacking, they prefer to move to warmer places and not work from Russia, because they could be recruited into the army, or other things could happen. That’s why it’s so important to collect all the evidence and information about the person, prove that he was involved in some attacks, and share that with our partners.

Also, because you have a long memory, you can wait: you can identify this hacker while they are in Russia, hold all the information, and then, when they are in Thailand or somewhere else, move in on them. You’re not necessarily in a rush?

They attack a lot of our civil infrastructure. That is a war crime, and it has no time expiration. That’s why it’s so important. We can wait 10 years and then arrest him in Spain or other countries.

What are the cyber volunteers doing, and what is their role?

We don’t have many volunteers today, but they are really smart people from around the world, from the United States and the European Union. They have knowledge of IT, and sometimes of blockchain analysis. They help us run analysis against the Russians, collect data about the wallets they use for fundraising campaigns, and sometimes they also inform us about a new form or new group that the Russians create to coordinate their activities.

It’s important because we can’t cover everything that is happening. Russia is a really big country; they have many groups and many people involved in the war. That type of cooperation with volunteers is really important now, especially because they often have better knowledge of local languages.

Sometimes we have volunteers who are really close to Russian-speaking countries. That helps us understand what exactly they are doing. There is also a community of IT guys that communicates with our volunteers directly. It’s important, and we really want to invite other people to that activity. It’s not illegal or anything like that. They just provide the information and tell us what they can do.

What about pro-Ukrainian hackers like the Ukraine IT Army? Do you just let them do what they want, or are they also potential targets for investigation?

No, we don’t cooperate directly with them.

We have another project that also involves many subscribers. I also talked about it during my presentation: it’s called BRAMA. It’s a gateway where we coordinate and gather people. One thing we propose is blocking and destroying Russian propaganda and psyops on the internet. We have been really effective and have had really big results: we have blocked more than 27,000 resources that belong to Russia, where they publish their narratives and psyops materials. Today we have also added new functions to our community. We not only fight propaganda, we also fight fraud, because a lot of the fraud in Ukraine today is also created by the Russians.

They also have a big impact with that, because if they launder money and take it from our citizens, we can help. That’s why we include those activities, so we can react proactively to reports we receive from our citizens and our partners about new types of fraud that could be happening on the internet.

We also provide training for our citizens on cyber hygiene and cybersecurity. That is also important today, because Russian hackers not only target critical infrastructure or government structures, they also try to get our people’s data.

For example, Telegram. It’s not a big problem now, but it’s a new challenge for us, because they first send interesting material and ask people to communicate or interact with bots. On Telegram, you can create bots. And if you just tap twice, they get access to your account, change the number, change the two-factor authentication, and you lose your account.

Is fraud done to raise funds for the war?

Yes.

Can you tell me more about Russian fundraising? Where are they doing it, and who is giving them money? Are they using the blockchain?

There are some benefits and also disadvantages that crypto gives them. First of all, [Russians] use crypto a lot. They create almost every kind of wallet, from Bitcoin to Monero. Now they understand that some types of crypto are really dangerous for them, because many of the exchanges cooperate and confiscate the funds they collect to help their military.

How are you going after this type of fundraising?

If they use crypto, we label the addresses and make some attribution. That’s our main goal, and it’s also the type of activity our volunteers help us with. We are really effective at that. But if they use banks, we can only collect the data and understand who exactly is responsible for that campaign. Sanctions are the only good way to deal with that.

What is cyber resistance?

Cyber resistance is the big challenge for us. We want to build that cyber resistance in cyberspace for our users and our resources. First of all, if we talk about users, we start with training and sharing advice and knowledge with our citizens. The idea is how you can react to the attacks that are expected in the future.

How is the Russian government using crypto after the invasion?

Russia didn’t change everything in crypto, but they adapted because they saw there were many sanctions. They create new ways to launder money, to prevent attribution of the addresses they use for their infrastructure, and to pay or receive funds. In crypto, it’s really easy to create many addresses. Previously they didn’t do that as much, but now they use it often.



'Reverse' searches: The sneaky ways that police tap tech companies for your private data | TechCrunch


U.S. police departments are increasingly relying on a controversial surveillance practice to demand large amounts of users’ data from tech companies, with the aim of identifying criminal suspects.

So-called “reverse” searches allow law enforcement and federal agencies to force big tech companies, like Google, to turn over information from their vast stores of user data. These orders are not unique to Google — any company with access to user data can be compelled to turn it over — but the search giant has become one of the biggest recipients of police demands for access to its databases of users’ information.

For example, authorities can demand that a tech company turn over information about every person who was in a particular place at a certain time based on their phone’s location, or who searched for a specific keyword or query. Thanks to a recently disclosed court order, authorities have shown they are able to scoop up identifiable information on everyone who watched certain YouTube videos.

Reverse searches effectively cast a digital dragnet over a tech company’s store of user data to catch the information that police are looking for.

Civil liberties advocates have argued that these kinds of court-approved orders are overbroad and unconstitutional, as they can also compel companies to turn over information on entirely innocent people with no connection to the alleged crime. Critics fear that these court orders can allow police to prosecute people based on where they go or whatever they search the internet for.

So far, not even the courts can agree on whether these orders are constitutional, setting up a likely legal challenge before the U.S. Supreme Court.

In the meantime, federal investigators are already pushing this controversial legal practice further. In one recent case, prosecutors demanded that Google turn over information on everyone who accessed certain YouTube videos in an effort to track down a suspected money launderer.

A recently unsealed search application filed in a Kentucky federal court last year revealed that prosecutors wanted Google to “provide records and information associated with Google accounts or IP addresses accessing YouTube videos for a one week period, between January 1, 2023, and January 8, 2023.”

The search application said that as part of an undercover transaction, the suspected money launderer shared a YouTube link with investigators, and investigators sent back two more YouTube links. The three videos — which TechCrunch has seen and have nothing to do with money laundering — collectively racked up about 27,000 views at the time of the search application. Still, prosecutors sought an order compelling Google to share information about every person who watched those three YouTube videos during that week, likely in a bid to narrow down the list of individuals to their top suspect, who prosecutors presumed had visited some or all of the three videos.

This particular court order was easier for law enforcement to obtain than a traditional search warrant because it sought connection logs showing who accessed the videos; demands for the contents of someone’s private messages are held to the higher standard of a search warrant.

The Kentucky federal court approved the search order under seal, blocking its public release for a year. Google was barred from disclosing the demand until last month when the court’s order expired. Forbes first reported on the existence of the court order.

It’s not known if Google complied with the order, and a Google spokesperson declined to say either way when asked by TechCrunch.

Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, said this was a “perfect example” of why civil liberties advocates have long criticized this type of court order for its ability to grant police access to people’s sensitive personal information.

“The government is essentially dragooning YouTube into serving as a honeypot for the feds to ensnare a criminal suspect by triangulating on who’d viewed the videos in question during a specific time period,” said Pfefferkorn, speaking about the recent order targeting YouTube users. “But by asking for information on everyone who’d viewed any of the three videos, the investigation also sweeps in potentially dozens or hundreds of other people who are under no suspicion of wrongdoing, just like with reverse search warrants for geolocation.”

Demanding the digital haystack

Reverse search court orders and warrants are a problem largely of Google’s own making, thanks in part to the gargantuan amounts of data that the tech giant has long collected on its users, like browsing histories, web searches and even granular location data. Realizing that tech giants hold huge amounts of users’ location data and search queries, law enforcement began convincing courts to grant access to broad swaths of tech companies’ databases, rather than the records of individual users.

A court-authorized search order allows police to demand information from a tech or phone company about a person who investigators believe is involved in a crime that took place or is about to happen. But instead of trying to find their suspect by looking for a needle in a digital haystack, police are increasingly demanding large chunks of the haystack — even if that includes personal information on innocent people — to sift for clues.

Using the same technique as demanding identifying information on anyone who viewed certain YouTube videos, law enforcement can also demand that Google turn over data identifying every person who was at a certain place and time, or every user who searched the internet for a specific query.

Geofence warrants, as they are more commonly known, allow police to draw a shape on a map around a crime scene or place of interest and demand huge swaths of location data from Google’s databases on anyone whose phone was in that area at a point in time.
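Conceptually, a geofence demand reduces to a spatial and temporal filter over a store of location records. The sketch below is purely illustrative, assuming a simplified record format; the field names, radius and schema are invented for the example and do not reflect Google’s actual systems.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Hypothetical record format; real location databases differ.
@dataclass
class LocationPing:
    user_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two coordinates."""
    r = 6_371_000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def geofence_matches(pings, center_lat, center_lon, radius_m, start, end):
    """Return IDs of every user with a ping inside the circle during the window."""
    return sorted({
        p.user_id
        for p in pings
        if start <= p.timestamp <= end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    })
```

The point of the sketch is the shape of the query: it matches everyone whose device reported a location inside the fence during the window, regardless of any connection to the crime.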


Police can also use so-called “keyword search” warrants that can identify every user who searched a keyword or search term within a time frame, typically to find clues about criminal suspects researching their would-be crimes ahead of time.
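A keyword search warrant has the same dragnet shape, only the filter runs over query logs instead of location records. This is a minimal sketch under the same caveat: the log schema and field names are assumptions for illustration, not the actual systems of any company.

```python
from datetime import datetime

# Illustrative log schema; real search-log formats are not public.
search_log = [
    {"user": "u1", "query": "how to pick a lock", "ts": datetime(2023, 1, 2)},
    {"user": "u2", "query": "weather tomorrow", "ts": datetime(2023, 1, 2)},
    {"user": "u3", "query": "lock pick set reviews", "ts": datetime(2023, 1, 9)},
]

def keyword_warrant(log, keyword, start, end):
    """Every user whose query contained the keyword within the time frame."""
    return sorted({
        row["user"]
        for row in log
        if start <= row["ts"] <= end and keyword in row["query"].lower()
    })
```

As with the geofence case, the query is defined by the term and the time frame, not by a named suspect, which is why anyone who happened to search the same words is swept in.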

Both of these warrants can be effective because Google stores the granular location data and search queries of billions of people around the world.

Law enforcement might defend the surveillance-gathering technique for its uncanny ability to catch even the most elusive suspected criminals. But plenty of innocent people have been caught up in these investigative dragnets by mistake — in some cases as criminal suspects — simply by having phone data that appears to place them near the scene of an alleged crime.

Though Google’s practice of collecting as much data as it can on its users makes the company a prime target and a top recipient of reverse search warrants, it’s not the only company subject to these controversial court orders. Any tech company large or small that stores banks of readable user data can be compelled to turn it over to law enforcement. Microsoft, Snap, Uber and Yahoo (which owns TechCrunch) have all received reverse orders for user data.

Some companies choose not to store user data and others scramble the data so it can’t be accessed by anyone other than the user. That prevents companies from turning over access to data that they don’t have or cannot access — especially when laws change from one day to the next, such as when the U.S. Supreme Court overturned the constitutional right to access abortion.

Google, for its part, is putting a slow end to its ability to respond to geofence warrants, specifically by moving where it stores users’ location data. Instead of centralizing enormous amounts of users’ precise location histories on its servers, Google will soon start storing location data directly on users’ devices, so that police must seek the data from the device owner directly. Still, Google has so far left the door open to receiving search orders that seek information on users’ search queries and browsing history.

But as Google and others are finding out the hard way, the only way for companies to avoid turning over customer data is by not having it to begin with.

