Category Archives: LAWMAKERS

Australia likely to get its own GDPR

Everyone in the Australian cybersecurity ecosystem has a role to play to ensure the security of the nation, according to Nationals Senator Bridget McKenzie.

The mandatory data breach notification laws coming into effect in Australia next year will be followed by other laws to ensure that everyone in the digital ecosystem — including government divisions, large corporates, small to medium-sized enterprises (SMEs), and consumers — is playing their part in keeping Australia “cyber secure”, according to Senator Bridget McKenzie.

McKenzie, who is the chair of the Foreign Affairs, Defence, and Trade Legislation Committee, likened cyber breaches to the “system of disease in the pre-industrial revolution that just swept through”.

“Cyber breaches have the capacity to wipe out industries, wipe out systems, wipe out communities, if every member of that community or that cyber ecosystem isn’t following best practice when it comes to keeping their information secure,” McKenzie told ZDNet at the Australian Computer Society’s Reimagination Thought Leaders’ Summit.

“It’s not just defence’s job or ASIO’s or DSTO’s or the government’s indeed, but every SME and private homeowner needs to have an eye for cybersecurity, making sure their data’s safe.”

McKenzie said the mandatory data breach notification laws, set to come into effect next year, are a step towards keeping organisations alert and accountable, with other laws expected to be introduced in Australia in coming years, possibly similar to those coming into effect next year in the European Union.

The European Union’s (EU) General Data Protection Regulation (GDPR) will require organisations around the world that hold data belonging to individuals from within the EU to provide a high level of protection and explicitly know where every piece of data is stored.

Organisations that fail to comply with the regulation requirements could be fined up to €20 million, or, in the case of an undertaking, up to 4 percent of the total worldwide annual turnover of the preceding financial year — whichever is higher.
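The “whichever is higher” rule is easy to mis-read, so a short calculation may help; the turnover figure below is hypothetical, used only to illustrate how the two tiers interact:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the GDPR's higher tier:
    EUR 20 million or 4 percent of worldwide annual turnover,
    whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company turning over EUR 1 billion faces up to EUR 40 million,
# since 4 percent of turnover exceeds the EUR 20 million floor;
# a EUR 100 million company is still exposed to the full EUR 20 million.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(100_000_000))    # 20000000
```

In other words, the flat €20 million figure acts as a floor, not a cap, for large organisations.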

“No longer can you say, ‘Oh I’ll leave it to someone else’, because the flow-on effects, the interconnectedness, the Internet of Things, is such that if one member of that web, if you like, has a security breach, it has flow-on effects for everybody involved,” McKenzie said.

Additionally, Australians need to have the confidence that they can share private information such as their health details and not have it end up in the public sphere, otherwise the nation will not be able to experience the full benefits of technology, McKenzie said.

Shadow Minister for the Digital Economy Ed Husic said, however, that the government has a long way to go in building that confidence, given 50,000 Australians have been affected by a government data breach that occurred in October. He noted that the breach was not a technological error, but a human error.

“How do we build consumer or citizen confidence about protection of privacy?” Husic said. “50,000 people were affected by a data breach across government, releasing details of passwords and credit cards. It’s not all tech related … people often blame tech for this. It’s people and the way that they use data and it’ll be interesting to see the details that come out on this in the next few days.”

“This data breach occurred back in October, no public explanation of it, no detail about what was known, what was being done to fix it. If we want people to be confident that data is being used well by government, then the government’s got a long way to go to build that confidence.”

Husic added that the government needs to lead by example; it should be notifying the public about data breaches if it wants businesses to do the same.

“[The government’s] got to do some things itself. And you can’t lecture business about getting focused on cybersecurity if you’re losing your own moral authority … because you’re not looking after data within your own patch,” he said.

McKenzie believes in Australia’s growing status as a cybersecurity hub, saying that the nation is equipped with the right expertise in this area. She added that Australia is in the process of creating a strong cybersecurity industry capable of exporting.

“Our law enforcement and intelligence agencies are world-class. We’re also part of Five Eyes, which means we have a lot of access to information and technology and collaboration opportunities,” she said. “We lead the world in quantum computing … and it [has the] potential to contribute further to security of data and security of communications particularly in the intelligence and defence spheres.

“We’ve really got some technical expertise, but also I think a richness around governance frameworks and excellence in regulatory frameworks that can also assist other governments and other organisations worldwide to understand best practices in the area.”

In September, Ambassador for Cyber Affairs Dr Tobias Feakin communicated a similar sentiment, saying Australia has an international standing in cybersecurity, and brings “key qualities” to the table.

Australia has also played a role in the creation of international peacetime norms for cyberspace, including chairing the first United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) in 2013, and helping develop the 11 international norms agreed to in subsequent UN GGE meetings.

“We have regional knowledge beyond most. We have a trusted diplomatic brand, and that’s something that we intend to capitalise on. We have strategic and economic interests in the region. And we have long-standing development partnerships across the region already,” Feakin said at the second annual SINET61 conference in Sydney.

“We need to capitalise on those, make the most of them. Not just for us as a government, [and] for regional partners as well, but also for our private sector … We see this issue as central to our economic future,” he said.

“It’s only this year that it’s just reached the point, of tipping over, to 50 percent of all internet users living in the Asia-Pacific. But really, still, there’s huge economic growth to unravel there, because still 60 percent of all households don’t have internet coverage.”

Last month, launching the International Cyber Engagement Strategy, Foreign Minister Julie Bishop said that for the purpose of national security, cyberspace cannot be an ungoverned space.

“Just as we have international rules that guide how states behave, and how states should behave towards each other, the international rules-based order that’s been in place for about 70 years, so too must states acknowledge that activities in cyberspace are governed by the same set of rules as military and security activities in traditional domains,” Bishop said in October.

“The 2016 US presidential election focused the world’s attention on the potential for cyber operations to interfere with democratic processes. This cannot be allowed to continue. It strikes at the very heart of the sovereignty of nations.”

According to the International Cyber Engagement Strategy, Australia will develop an international “architecture for cooperation” including mechanisms to respond to unacceptable behaviour in cyberspace in a timely manner.

“Australia’s responses to malicious cyber activity could comprise law enforcement or diplomatic, economic, or military measures as appropriate for the circumstances. This could include, but is not restricted to, offensive cyber capabilities that disrupt, deny, or degrade the computers or computer networks of adversaries,” the strategy states.

The strategy also implies that the nation has the capability to identify the source of cyber attacks.

“Depending on the seriousness and nature of an incident, Australia has the capability to attribute malicious cyber activity in a timely manner to several levels of granularity — ranging from the broad category of adversary through to specific states and individuals,” the strategy states.

In September, the federal government pledged AU$50 million over seven years for the cybersecurity cooperative research centre (CRC), with over AU$89 million in further funding to come from 25 industry, research, and government partners.

The cybersecurity CRC will deliver solutions to increase the security of critical infrastructure, the government said at the time, which includes “frameworks, products, and approaches that will service existing and future ICT enterprises across a broad range of platforms and operating systems”.

Assistant Minister for Industry, Innovation and Science Craig Laundy said the activities of the cybersecurity CRC will contribute to the objectives laid out in Australia’s AU$240 million Cyber Security Strategy, which is aimed at defending the nation’s cyber networks from organised criminals and state-sponsored attackers.



Henry Sapiecha

New USA Federal Requirements On Cellphone Surveillance

WASHINGTON (AP) — Federal law enforcement officials will be routinely required to get a search warrant before using secretive and intrusive cellphone-tracking technology under a new Justice Department policy announced Thursday.

The policy represents the first effort to create a uniform legal standard for federal authorities using equipment known as cell-site simulators, which track cellphones used by suspects.

It comes amid concerns from privacy groups and lawmakers that the technology, which is now widely used by local police departments, is infringing on privacy rights and is being used without proper accountability.

“The policy is really designed to address our practices, and to really try to promote transparency and consistency and accountability — all while being mindful of the public’s privacy interest,” Deputy Attorney General Sally Yates told reporters in announcing the policy change.

The policy applies only to federal agencies within the Justice Department and not, as some privacy advocates had hoped, to state and local law enforcement whose use of the equipment has stirred particular concern and scrutiny from local judges.

The technology — also known as a Stingray, a suitcase-sized device — can sweep up basic cellphone data from a neighborhood by tricking phones in the area into believing that it’s a cell tower, allowing it to identify unique subscriber numbers. The data is then transmitted to the police, helping them determine the location of a phone without the user even making a call or sending a text message.

The equipment used by the Justice Department does not collect the content of communications.

Even as federal law enforcement officials tout the technology as a vital tool to catch fugitives and kidnapping suspects, privacy groups have raised alarms about the secrecy surrounding its use and the collection of cellphone information of innocent bystanders who happen to be in a particular neighborhood or location.

In creating the new policy the Justice Department was mindful of those concerns and also sought to address inconsistent practices among different federal agencies and offices, Yates said.

“We understand that people have a concern about their private information, and particularly folks who are not the subjects or targets of investigations,” Yates said.

The new policy requires a warrant in most cases, except for emergencies like an immediate national security threat, as well as unspecified “exceptional circumstances.” The warrant applications are to set out how the technology will be used.

In addition, authorities will be required to delete data that’s been collected once they have the information they need, and are expected to provide training to employees.

The policy could act as a blueprint for state and local law enforcement agencies in developing their own regulations. But it’s unclear how broad an impact Thursday’s announcement will have, since it does not directly affect local police agencies unless they’re working alongside federal authorities on a case or relying on their assistance.

Use of the technology has spread widely among local police departments, which have been largely mum about it and hesitant to disclose details — often withholding materials or heavily censoring the documents they do provide.

Local departments have faced scrutiny from judges about how they deploy the equipment, though agencies have often insisted that non-disclosure agreements with the FBI limit what they can say.

The FBI has said that while specific capabilities of the equipment are considered sensitive, it did not intend for the agreements to prevent the police from disclosing to a court that the equipment was used in a particular case. Yates said she expected the FBI to revise any such agreements to be more transparent.

The American Civil Liberties Union called the policy a good first step, but expressed disappointment that it did not cover federal agencies outside the Justice Department or local police who use federal funds to purchase the surveillance equipment. It called on the Justice Department to close remaining loopholes, such as the one allowing for warrantless surveillance under undefined “exceptional circumstances.”

“After decades of secrecy in which the government hid this surveillance technology from courts, defense lawyers, and the American public, we are happy to see that the Justice Department is now willing to openly discuss its policies,” ACLU lawyer Nathan Freed Wessler said in a statement.

Nate Cardozo, a staff attorney with the Electronic Frontier Foundation, a privacy group, praised the policy as an important step, though he said he suspected Justice Department attorneys saw “the writing on the wall” and recognized that judges would increasingly begin requiring warrants.

Though the policy does not require local police to follow the lead of federal agencies, “this is going to let the air out of state law enforcement’s argument that a warrant shouldn’t be required,” he said.

“We think that given the power of cell-site simulators and the sort of information that they can collect — not just from the target but from every innocent cellphone user in the area — a warrant based on probable cause is required by the Fourth Amendment,” Cardozo said.


This Algorithm Decides Crime Cases Almost As Well As A Judge

A computer program could help relieve the massive backlogs facing the world’s highest courts


A computer algorithm took on the work of real human judges and did a pretty good job, predicting the decisions of one of Europe’s highest courts with 79 percent accuracy. The finding suggests artificial intelligence could help the world’s busiest courts work through their massive backlog of cases, even if an algorithm isn’t about to take up a digital gown and gavel and start actually deciding cases.

The AI analyzed cases tried before the European Court of Human Rights, which hears cases from people and groups who claim their civil or political rights have been violated in their home countries. An international team of computer scientists worked with a legal scholar to determine just how well AI could predict the court’s ultimate judgement based on how the written decision described the factual background of the case and the arguments of the parties involved. They found it agreed with the judges’ decision four of five times — and that the underlying facts of the case were by far the best predictor of the outcome of a case, rather than any of the more abstract legal arguments.

“The fact that we can get this accuracy, it means that there are some consistent patterns of violations that lead to overturning the [previous court’s] decision,” University of Pennsylvania computer scientist Daniel Preoţiuc-Pietro told Vocativ.

That suggests the court is typically less concerned with parsing philosophical questions of whether a specific instance is a human rights violation than it is determining how that situation fits into their already defined categories of violations. Preoţiuc-Pietro pointed to the example of people who allege mistreatment in prison as a situation that typically led to decisions in those people’s favor. “That’s definitely more likely for the court to actually accept that the state made a mistake and the people involved were actually justified,” he said.


The AI used what’s known as natural language processing to analyze the cases. This particular method involved looking at the text of a decision as a big bag of words, not worrying about any particular word order or grammar. Instead, the AI looked at what individual words and combinations of two, three, or four words appeared most frequently in the text, regardless of order. The AI then looked at all these combinations, known as N-grams, and clustered them into different overall topics.
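The N-gram extraction described above can be sketched in a few lines. This is a generic illustration of the bag-of-words idea, not the researchers’ actual pipeline, and the sample sentence is invented:

```python
from collections import Counter

def ngrams(text: str, max_n: int = 4) -> Counter:
    """Count word N-grams (N = 1..max_n) in a text, ignoring word
    order and grammar beyond the N-gram window itself — the
    'big bag of words' representation."""
    words = text.lower().split()
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

features = ngrams("the applicant alleged mistreatment in prison")
print(features["mistreatment in prison"])  # 1
```

A classifier trained over such counts never sees sentence structure, only which word combinations occur and how often — which is why the study could ask whether factual vocabulary alone predicts outcomes.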

The court’s decisions include lengthy sections recapping not only the factual background of the cases but also the original arguments made by the parties in the case. This gave the AI a broad sense of what each text was talking about and gave it the context necessary to predict the outcome of the case, which it did correctly in nearly four out of every five cases.

But that doesn’t mean the researchers are hoping to see AI judges anytime soon.

“We’re not advocating for automating any decisions,” said Preoţiuc-Pietro. “Decisions should still be made by the judges.” Where the AI can make a difference is in helping determine which cases make it to the judges in the first place.


In 2015, the researchers found that nearly 85,000 petitions were submitted to the court, of which just 891 were actually decided upon. All the rest were thrown out as inadmissible, meaning the court couldn’t take them on and the previous decision by a lower court would have to stand. The European Court of Human Rights relies both on individual judges and committees to work through all these cases and figure out which are worth bringing to the actual court’s attention. Last year, that meant the entire court apparatus had to process more than 230 cases every single day, making it a huge challenge just to give each petition the human attention it deserves.

Artificial intelligence, by contrast, could zip through 85,000 petitions and decide which were most likely to be worth the court’s time, based on how similar each petition is to the court’s previous cases. Preoţiuc-Pietro suggested the algorithm could separate the cases into three groups based on the court’s prior history: those the court would likely rule on, those it likely would rule inadmissible, and those in a gray area. Committees could then devote more time to examining the cases already identified as being of uncertain status, rather than having them take valuable time doing all their own categorization.
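A triage of that kind could amount to thresholding a classifier’s predicted probability into three queues. The cut-off values below are illustrative assumptions, not figures from the study:

```python
def triage(p_admissible: float, low: float = 0.3, high: float = 0.7) -> str:
    """Sort a petition into one of three review queues based on a
    model's predicted probability that the court would take it up."""
    if p_admissible >= high:
        return "likely ruled on"
    if p_admissible <= low:
        return "likely inadmissible"
    return "grey area - committee review"

print(triage(0.9))  # likely ruled on
print(triage(0.5))  # grey area - committee review
```

Under this scheme only the middle band consumes committee time, which is the efficiency gain Preoţiuc-Pietro describes below.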

“These committees are time-limited and beyond that very costly, so they can actually look at just the flagged cases which are more likely to be disputed and analyze them more thoroughly,” said Preoţiuc-Pietro, “while the others they can be sent for just individuals and they don’t need to be scrutinized by more people.”

The goal then wouldn’t be to take the human element out of the law, but the complete opposite: The European Court of Human Rights and other bodies like it would have more time to focus on their most difficult cases, while the AI would separate out the cases that would likely just get thrown out anyway.


 

UK refuses to reveal how many lawmakers are under surveillance

UK home secretary Theresa May did confirm that members of devolved parliaments and the European Parliament are not subject to wiretap protections.

UK home secretary Theresa May speaking on BBC radio (Image: BBC/Twitter; file photo)

The UK’s home secretary Theresa May has refused to confirm how many fellow lawmakers have had their communications intercepted by British intelligence agencies.

In a brief confrontation in the parliament’s House of Commons on Monday, fellow Conservative Peter Bone MP said May’s refusal to answer was an “indication” that some members of parliament (MPs) have been subject to surveillance by UK intelligence agencies.

The emergency session follows a ruling last week that determined the so-called Wilson Doctrine, a promise made by former prime minister Harold Wilson that said members of parliament won’t have their mail opened or phones tapped by the intelligence agencies without his direct knowledge, was no longer valid.

May said the doctrine “still applies,” but confirmed that devolved members of parliament in Scotland (MSPs), Wales, and Northern Ireland, as well as members of the European Parliament (MEPs), are not protected by the doctrine.

Joanna Cherry MP, a Scottish member of parliament, criticized May’s response, asking why the government thinks the Scottish parliament is “less deserving” of the doctrine’s protection. She added that the home secretary’s “caveated” comments about the doctrine in 2014 suggested the doctrine may have been partly suspended around the time of the Scottish national independence referendum, a national vote that saw Scotland remain as part of the United Kingdom.

Caroline Lucas MP, who brought the case under debate to the Investigatory Powers Tribunal, said lawmakers had been “misled” over the level of protections MPs are afforded under the doctrine.

Doctrine ‘cannot work sensibly’

Until last week, the doctrine was kept in force by every prime minister since Wilson, but was expanded in 2002 when former prime minister Tony Blair said the doctrine applied to “all forms” of communications.

But last week, James Eadie QC told the Investigatory Powers Tribunal (IPT), which hears complaints against the intelligence agencies, that the doctrine “simply cannot work sensibly” in an age of bulk data collection and mass surveillance, and did not have the force or weight of the law.

The IPT said that the UK’s spy agencies MI5, MI6, and GCHQ — the eavesdropping agency whose activities were detailed in an extensive range of documents leaked by whistleblower Edward Snowden — have their own separate policies that do not require the prime minister to be informed when parliamentary communications are collected.

MPs were quick to respond with anger, amid concerns that emails sent to and from parliamentary offices may have been collected or spied on.

In a letter to prime minister David Cameron, Scottish first minister Nicola Sturgeon asked for clarification, arguing “the confidentiality of communications between parliamentarians and their constituents is of the utmost importance,” according to The Guardian.

MPs not ‘above the law’

Many of the lawmakers on Monday argued that the need to protect their communications from surveillance was to protect whistleblowers, and not about driving a wedge of privilege between them and the public.

David Davis MP, a Conservative politician known for being pro-civil liberties, and who has almost always voted against requiring the mass retention of information about communications, said MPs need the doctrine’s protections against government surveillance because their job is to “hold the government to account.”

He argued that MPs often “deal with campaigners, journalists, whistleblowers, and our own constituents” in bringing to light wrongdoing disclosed by members of the public, including police and public-sector workers, and employees of big corporations.

Chris Bryant MP, who called for the emergency debate following last week’s ruling, argued that MPs “cannot ever be above the law,” a sentiment echoed by others, including the home secretary.

Bryant, a Labour MP with a long record of voting in favor of data retention and communications collection legislation, accused May of withholding any public statement about a change in the doctrine’s standing because it wasn’t “compatible” with the current state of national security.

Davis, in agreement with Lucas and others, said the doctrine must be enshrined in law.

May will “soon” introduce the so-called “snoopers’ charter,” first mentioned earlier this year in the Queen’s annual speech.

Known as the Investigatory Powers Bill, the Conservative government said the draft law would give authorities “tools” to keep the public safe by addressing gaps in existing intelligence gathering.

Dominic Grieve MP, chair of the Security and Intelligence Committee which oversees the intelligence agencies, said the committee will examine how parliamentarians will be treated under the new draft bill.
