Category Archives: LAWS

Government’s plan to spy on all Australians exposed in leaked letters

It may shortly be far easier for government spies to access your private data. Photo source: Pixabay

We’re constantly being advised to protect our data and information online, but it turns out there may be an even greater cause for concern.

An exclusive report by The Sunday Telegraph reveals our online data may not even be safe from the Australian Government. Australian citizens may soon be subjected to secret digital monitoring by the country’s top cyber spy agency, with no warrant required to access their information.

This means everything from text messages to emails and even bank statements could be accessed in secret under the radical new proposed plan. The Sunday Telegraph viewed the secret letters between the heads of Department of Home Affairs and Defence. The letters detail possible new powers for the Australian Signals Directorate (ASD).

As the current rules stand, intelligence is not to be produced on Australian citizens. Having said that, the Australian Federal Police and domestic spy agency ASIO can investigate people with a warrant and also seek help from the ASD if needed in what are deemed to be extreme cases.

If the proposal is passed, it would be up to Defence Minister Marise Payne and Home Affairs Minister Peter Dutton to allow spying to occur. Furthermore, they could approve cases without Australia’s top law officers being aware of it.

The Sunday Telegraph believes Dutton hasn’t yet presented Payne with any formal proposals for changes to the legislation. If passed, though, spies would be given permission to secretly access an Australian citizen’s financial data, health information and phone records. A change in the law would also make it illegal for government agencies and private businesses to hold back any information sought under the security measures.

The Sunday Telegraph believes the reason for the data crackdown would be to stop terrorism, child exploitation and other serious crimes being conducted both here in Australia and overseas.

Online data and its safety have made headlines several times in recent months. Earlier this year, Facebook came under fire for breaching data privacy rules. As it stands, anything you share or access online remains there, even if you delete it.

This means any photos, emails, website history, online comments and videos you upload or view are stored away somewhere in cyberspace. Worryingly, any information shared on a social media platform such as Facebook will remain with the company, even if your profile is deleted.

What are your thoughts? Are you concerned that your private information could be secretly accessed by spies and the government? Do you think it’s really to protect Australians, or just another feeble excuse for the government to gather more information about us? Big Brother is going too far this time, one would think. Write to your MP.

Henry Sapiecha

Notifiable Data Breaches initiative: Preparing to disclose a data breach in Australia

Australia’s Notifiable Data Breaches scheme will come into force next month. Here is what it means and how it will affect organisations, and individuals, in Australia.

WHAT IS THE NOTIFIABLE DATA BREACHES SCHEME?

Australia’s Notifiable Data Breaches (NDB) scheme comes into effect on February 22, 2018, and as the legislative direction is aimed at protecting the individual, there’s a lot of responsibility on each organisation to secure the data it holds.

The NDB scheme falls under Part IIIC of the Australian Privacy Act 1988 and establishes requirements for entities in responding to data breaches.

What that means is all agencies and organisations in Australia that are covered by the Privacy Act will be required to notify individuals whose personal information is involved in a data breach that is likely to result in “serious harm”, as soon as practicable after becoming aware of a breach.

Tax file number (TFN) recipients, to the extent that TFN information is involved in a data breach, must also comply with the NDB.

In addition to notifying affected individuals, organisations must under the scheme provide advice on how those affected should respond, as well as what to do now that their information is in the wild. The Australian Information Commissioner, currently Timothy Pilgrim, must also be notified of the breach.

“The NDB scheme formalises an existing community expectation for transparency when a data breach occurs,” Pilgrim told ZDNet. “Notification provides individuals with an opportunity to take steps to protect their personal information, and to minimise their risk of experiencing harm.”

Intelligence agencies, registered political parties, and small businesses and not-for-profit organisations with an annual turnover of less than AU$3 million are exempt from the NDB; credit reporting bodies, health service providers, and TFN recipients, by contrast, are covered.

Read more: Former ASIO head questions why political parties are exempt from breach disclosure

WHAT CONSTITUTES A DATA BREACH?

In general terms, an eligible data breach refers to the unauthorised access, loss, or disclosure of personal information that could cause serious harm to the individual whose personal information has been compromised.

Examples of a data breach include when a device containing customers’ personal information is lost or stolen, a database containing personal information is hacked, or personal information is mistakenly provided to the wrong person.

An employee browsing sensitive customer records without any legitimate purpose could constitute a data breach as they do not have authorised access to the information in question.

The NDB scheme uses the phrase “eligible data breaches” to specify that not all breaches require reporting. An example of this is where Commonwealth law prohibits or regulates the use or disclosure of information.

An enforcement body — such as the Australian Federal Police (AFP), the police force or service of a state or a territory, the Australian Crime Commission, and the Australian Securities and Investments Commission — does not need to notify individuals about an eligible data breach if its CEO believes on reasonable grounds that notifying individuals would be likely to prejudice an enforcement-related activity conducted by, or on behalf of, the enforcement body.

Although not always required to disclose a breach, a spokesperson for the AFP told ZDNet the agency would comply with its notification obligations in all circumstances where no relevant exemption under the Act applies.

See also: Privacy Commissioner to probe Australian government agencies on compliance

If the Australian Information Commissioner rules the breach is not bound by the NDB scheme, organisations may not have to disclose it any further.

In addition, data breaches that are notified under s75 of the My Health Records Act 2012 do not need to be notified under the NDB scheme as they have their own binding process to follow, which also lies under the umbrella of the OAIC.

Read more: OAIC received 114 voluntary data breach notifications in 2016-17

DETERMINING SERIOUS HARM

As the NDB dictates an objective benchmark in that the scheme requires a “reasonable person” to conclude that the access or disclosure is “likely to result in serious harm”, Melissa Fai, special counsel at Gilbert + Tobin, told ZDNet that in assessing the breach, an organisation should interpret the term “likely” to mean more probable than not — as opposed to merely possible.

“Serious harm” is not defined in the Privacy Act; but in the context of a data breach, serious harm to an individual may include serious physical, psychological, emotional, financial, or reputational harm.

Information that can cause serious harm includes information about an individual’s health; documents commonly used for identity fraud, including Medicare card, driver’s licence, and passport details; financial information; and a combination of types of personal information — rather than a single piece of personal information — that allows more to be known about an individual.

In assessing the risk of serious harm, entities should consider the broad range of potential kinds of harm that may follow a data breach.

THE NOTIFICATION PROCESS

Agencies and organisations that suspect an eligible data breach may have occurred must undertake a “reasonable and expeditious assessment” based on the above guidelines to determine if the data breach is likely to result in serious harm to any individual affected.

If an entity is aware of reasonable grounds to believe that there has been an eligible data breach, it must promptly notify individuals at risk of serious harm and the commissioner about the breach.

Organisations disclosing a breach must complete the OAIC’s Notifiable Data Breach statement form.

The notification to affected individuals and the commissioner must include the following information: The identity and contact details of the organisation, a description of the data breach, the kinds of information concerned, and recommendations about the steps individuals should take in response to the data breach.
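As a rough illustration of what that statement must capture, here is a minimal sketch in Python of a notification record with the four required elements; the class and field names are assumptions made for illustration, not the schema of the OAIC’s actual form.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BreachNotification:
    """The elements the NDB scheme requires in a notification (sketch only)."""
    organisation_name: str    # identity of the notifying entity
    contact_details: str      # how affected individuals can reach it
    breach_description: str   # what happened, and when
    information_kinds: List[str] = field(default_factory=list)   # e.g. "email addresses"
    recommended_steps: List[str] = field(default_factory=list)   # what individuals should do

    def is_complete(self) -> bool:
        """Basic check that every mandatory element has been filled in."""
        return all([self.organisation_name, self.contact_details,
                    self.breach_description, self.information_kinds,
                    self.recommended_steps])

# Hypothetical example
notice = BreachNotification(
    organisation_name="Example Pty Ltd",
    contact_details="privacy@example.com.au",
    breach_description="Laptop holding customer records lost on 1 March",
    information_kinds=["names", "email addresses"],
    recommended_steps=["Change your account password", "Watch for phishing emails"],
)
assert notice.is_complete()
```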

The entity has up to 30 days from becoming aware of a suspected breach to conduct its own assessment — 30 days is the absolute maximum — and once it concludes an eligible breach has occurred, those affected are to be notified as soon as practicable.

The NDB scheme, however, provides entities with the opportunity to take steps to address a data breach in a timely manner and so avoid the need for further notification — including notifying individuals whose data has been exposed.

See also: Privacy Commissioner finds Australia more confident in reporting breaches to police

FAILING TO DISCLOSE A BREACH

Failure to comply with the NDB scheme will be “deemed to be an interference with the privacy of an individual” and there will be consequences.

Gilbert + Tobin’s Fai explained that if an organisation is found to have hidden an eligible data breach, or is otherwise found to have failed to report an eligible data breach, such failure will be considered an interference with the privacy of an individual affected by the eligible data breach, and serious or repeated interferences with the privacy of an individual can give rise to civil penalties under the Privacy Act.

If the data breach that the organisation has failed to report is serious, or if the organisation has failed to report an eligible data breach on two or more separate occasions, Fai explained the OAIC has the ability to seek a civil penalty order against the organisation of up to AU$2.1 million, depending on the significance and likely harm that may result from the data breach.

“Of course, an organisation must also consider the risk of reputational damage to its brand and the commercial damage that might flow from that, particularly given the growing importance to an organisation’s bottom line of consumer trust in an organisation’s data management policies and processes and its ability to respond quickly, effectively, and with integrity to data breaches,” Fai added.

“The effects of the data breach on Equifax last year and its response are a case in point.”

See also: Massive Equifax data breach exposes as many as 143 million customers

THE ROLE OF THE INFORMATION COMMISSIONER AND THE OAIC

The commissioner has a number of roles under the NDB scheme, which includes receiving notifications of eligible data breaches; encouraging compliance with the scheme, including by handling complaints, conducting investigations, and taking other regulatory action in response to instances of non-compliance; and offering advice and guidance to regulated organisations, and providing information to the community about the operation of the scheme.

The OAIC has published guidelines on the scheme, which also includes information on how to deal with the aftermath of a breach.

HOW DID THE NDB COME ABOUT?

The federal government finally passed the data breach notification laws at its third attempt in February 2017.

A data breach notification scheme was recommended by the Joint Parliamentary Committee on Intelligence and Security in February 2015, prior to Australia’s mandatory data-retention laws being implemented.

HOW TO GET READY

According to Gilbert + Tobin, organisations should be at the very least getting familiar with what data they have, where it is kept, and who has access to it.

Read more: NetApp warns privacy is not synonymous with security

Assessing existing data privacy and security policies and procedures to make sure organisations are in a position to respond appropriately and quickly in the event of a data breach is also important.

“This should include a data breach response plan which works across diverse stakeholders in an organisation and quickly brings the right people — such as from IT, legal, cybersecurity, public relations, management, and HR — together to respond effectively,” Fai told ZDNet.

It wouldn’t hurt to continuously audit and strengthen cybersecurity strategies, protection, and tools to avoid and prevent data breaches.

“It is also important that an organisation’s personnel are aware of the NDB scheme. Personnel need appropriate training, including to identify when an eligible data breach may have occurred and how to follow an entity’s policies and procedures on what to do next,” Fai explained, adding this also extends to suppliers and other third-parties that process personal information on their behalf.

DOES YOUR BUSINESS HAVE A EUROPEAN CONNECTION?

From May this year, the General Data Protection Regulation (GDPR) will come into play, requiring organisations around the world that hold data belonging to individuals from within the European Union (EU) to provide a high level of protection and explicitly know where every piece of data is stored.

Organisations that fail to comply with the regulation requirements could be slapped with administrative fines up to €20 million, or in the case of an undertaking, up to 4 percent of the total worldwide annual turnover of the preceding financial year, whichever is higher.
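To see how the “whichever is higher” cap works in practice, here is a minimal sketch; the turnover figures are hypothetical.

```python
def gdpr_maximum_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine: the greater of
    EUR 20 million or 4 percent of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(gdpr_maximum_fine(1_000_000_000))  # 40000000.0 -> the 4 percent limb applies
print(gdpr_maximum_fine(100_000_000))    # 20000000.0 -> the flat EUR 20m cap applies
```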

The laws do not stop at European boundaries, however, with those in the rest of the world, including Australia, bound by the GDPR requirements if they have an establishment in the EU, if they offer goods and services in the EU, or if they monitor the behaviour of individuals in the EU.

See more: How Europe’s GDPR will affect Australian organisations

The GDPR and the Australian Privacy Act share many common requirements, but there are a bunch of differences, with one crucial element being the time to disclose a breach.

Under the NDB scheme, organisations have a maximum of 30 days to declare the breach; under the GDPR, organisations have 72 hours to notify authorities after having become aware of it, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons.

“In sum, if an Australian organisation is subject to the GDPR regime when it comes into effect in May this year, it needs to comply with its obligations under both regimes — although the two regimes contain different requirements, they are not mutually exclusive,” Fai added. “However, when it comes to data breaches, the high watermark of compliance is complying with the European regime.”

Read also: What is GDPR? Everything you need to know about the new general data protection regulations

HOW TO PREVENT A DATA BREACH

Any organisation that has purchased a security solution from a vendor knows that there is no silver bullet to completely secure an organisation.

“When it comes to data breaches, everybody is looking for something, a product, a process, a standard to prevent them completely. Unfortunately, this isn’t possible,” Symantec CTO for Australia, New Zealand, and Japan Nick Savvides told ZDNet.

“The first thing any organisation should do is understand that data breaches are not always preventable but they are mitigatable. Whether the data breach is a result of a compromise, malicious insider, or even a well-meaning insider accidentally leaking information, mitigations exist.”

Breaking the mitigations into three parts, Savvides said the first is dealing with a malicious attacker, the second is having information-centric security which he said applies to all scenarios, and the third mitigation category is the response plan.

“Most organisations don’t have very effective response plans for a data breach event. They might have a plan, but from what has been seen, the plans are generally very academic in nature rather than practical and often get bypassed in the case of a real event,” he explained.

“Organisations need to have processes for having incidents reported, a clear plan on who to involve, what process to follow, and a clear PR message.”

Savvides said it is clear that users value transparency and clear speech rather than ambiguous legalese responses some organisations have produced.

“The commencement of the scheme is also a timely opportunity for organisations to take stock of the personal information they collect and hold, and how it is managed,” Pilgrim added. “By ensuring personal information is secured and managed appropriately, organisations can reduce the likelihood of a data breach occurring in the first place.”

PREVIOUS DATA BREACHES IN AUSTRALIA

Henry Sapiecha

Amazon gives record amount of client data to US law enforcement

The company’s fifth transparency report reveals more customer data was handed to US law enforcement in the first half of last year than ever before.

Law enforcement requests for Amazon’s cloud customers has gone up, but the company still won’t say if Echo has been wiretapped. (Image: CNET/CBS Interactive)

Amazon turned over a record amount of customer data to the US government in the first half of last year in response to demands by law enforcement.

The retail and cloud giant quietly posted its latest transparency report on Dec. 29 without notice — as it has with previous reports — detailing the latest figures for the first six months of 2017.

The report, which focuses solely on its Amazon Web Services cloud business, revealed 1,936 different requests between January and June 2017, a rise from the previous bi-annual report.

The company received:

  • 1,618 subpoenas, of which the company fully complied with 42 percent;
  • 229 search warrants, of which the company fully complied with 44 percent;
  • 89 other court orders, of which the company fully complied with 52 percent.
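As a back-of-envelope check, the percentages above can be converted into approximate absolute counts; the figures below are rounded estimates derived from the report’s numbers, not figures Amazon published itself.

```python
# (total requests, share fully complied with) for each request type
requests = {
    "subpoenas": (1618, 0.42),
    "search warrants": (229, 0.44),
    "other court orders": (89, 0.52),
}

print(sum(count for count, _ in requests.values()))  # 1936, matching the reported total

for kind, (count, full_rate) in requests.items():
    print(f"{kind}: ~{round(count * full_rate)} fully complied with")
# subpoenas: ~680, search warrants: ~101, other court orders: ~46
```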

It’s not clear why there was a spike in requests during the half-year period. An Amazon spokesperson declined to comment.

Amazon also confirmed it received 75 requests from outside the US through a mutual legal assistance process, partially complying in two cases. The remaining cases were rejected, but the company didn’t say which countries made the requests.

Amazon said it did not receive any content removal orders during the period.

As in previous reports, the company refused to say if it had received a national security letter during the period. Tech companies are barred from disclosing exactly how many of these letters they receive, but under their First Amendment right to freedom of speech they can say if they have not received one.

Amazon instead preferred to say it had received between zero and 249 national security requests.

The company’s transparency reports do not take into account any other data-related business units, such as whether authorities have obtained data captured by or submitted through its Echo products.

Law enforcement has, since Echo’s inception, looked at ways to obtain data from the voice-activated assistant. Amazon has largely resisted efforts by police to obtain data from the always-listening product, but acquiesced in one homicide investigation after the suspect did not object to the turning over of his Echo data.

Henry Sapiecha

Australia likely to get its own GDPR

Everyone in the Australian cybersecurity ecosystem has a role to play to ensure the security of the nation, according to Nationals Senator Bridget McKenzie.

The mandatory data breach notification laws coming into effect in Australia next year will be followed by other laws to ensure everyone in the digital ecosystem — including government divisions, large corporates, small to medium-sized enterprises (SMEs), and consumers — is playing their role in keeping Australia “cyber secure”, according to Senator Bridget McKenzie.

McKenzie, who is the chair of the Foreign Affairs, Defence, and Trade Legislation Committee, likened cyber breaches to the “system of disease in the pre-industrial revolution that just swept through”.

“Cyber breaches have the capacity to wipe out industries, wipe out systems, wipe out communities, if every member of that community or that cyber ecosystem isn’t following best practice when it comes to keeping their information secure,” McKenzie told ZDNet at the Australian Computer Society’s Reimagination Thought Leaders’ Summit.

“It’s not just defence’s job or ASIO’s or DSTO’s or the government’s indeed, but every SME and private homeowner needs to have an eye for cybersecurity, making sure their data’s safe.”

McKenzie said the mandatory data breach notification laws, set to come into effect next year, are a step towards keeping organisations alert and accountable, with other laws expected to be introduced in Australia in the coming years, possibly similar to those coming into effect next year in the European Union.

The European Union’s (EU) General Data Protection Regulation (GDPR) will require organisations around the world that hold data belonging to individuals from within the EU to provide a high level of protection and explicitly know where every piece of data is stored.

Organisations that fail to comply with the regulation requirements could be fined up to €20 million, or, in the case of an undertaking, up to 4 percent of the total worldwide annual turnover of the preceding financial year — whichever is higher.

“No longer can you say, ‘Oh, I’ll leave it to someone else’, because the flow-on effects, the interconnectedness, the Internet of Things, is such that if one member of that web, if you like, has a security breach, it has flow-on effects for everybody involved,” McKenzie said.

Additionally, Australians need to have the confidence that they can share private information such as their health details and not have it end up in the public sphere, otherwise the nation will not be able to experience the full benefits of technology, McKenzie said.

Shadow Minister for the Digital Economy Ed Husic said, however, that the government has a long way to go in building that confidence, given 50,000 Australians have been affected by a government data breach that occurred in October. He noted that the breach was not a technological error, but a human error.

“How do we build consumer or citizen confidence about protection of privacy?” Husic said. “50,000 people were affected by a data breach across government, releasing details of passwords and credit cards. It’s not all tech related … people often blame tech for this. It’s people and the way that they use data and it’ll be interesting to see the details that come out on this in the next few days.”

“This data breach occurred back in October, no public explanation of it, no detail about what was known, what was being done to fix it. If we want people to be confident that data is being used well by government, then the government’s got a long way to go to build that confidence.”

Husic added that the government needs to lead by example; it should be notifying the public about data breaches if it wants businesses to do the same.

“[The government’s] got to do some things itself. And you can’t lecture business about getting focused on cybersecurity if you’re losing your own moral authority … because you’re not looking after data within your own batch,” he said.

McKenzie believes in Australia’s growing status as a cybersecurity hub, saying that the nation is equipped with the right expertise in this area. She added that Australia is in the process of creating a strong cybersecurity industry capable of exporting.

“Our law enforcement and intelligence agencies are world-class. We’re also part of Five Eyes, which means we have a lot of access to information and technology and collaboration opportunities,” she said. “We lead the world in quantum computing … and it [has the] potential to contribute further to security of data and security of communications particularly in the intelligence and defence spheres.

“We’ve really got some technical expertise, but also I think a richness around governance frameworks and excellence in regulatory frameworks that can also assist other governments and other organisations worldwide to understand best practices in the area.”

In September, Ambassador for Cyber Affairs Dr Tobias Feakin communicated a similar sentiment, saying Australia has an international standing in cybersecurity, and brings “key qualities” to the table.

Australia has also played a role in the creation of international peacetime norms for cyberspace, including chairing the first United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) in 2013, and helping develop the 11 international norms agreed to in subsequent UN GGE meetings.

“We have regional knowledge beyond most. We have a trusted diplomatic brand, and that’s something that we intend to capitalise on. We have strategic and economic interests in the region. And we have long-standing development partnerships across the region already,” Feakin said at the second annual SINET61 conference in Sydney.

“We need to capitalise on those, make the most of them. Not just for us as a government, [and] for regional partners as well, but also for our private sector … We see this issue as central to our economic future,” he said.

“It’s only this year that it’s just reached the point, of tipping over, to 50 percent of all internet users living in the Asia-Pacific. But really, still, there’s huge economic growth to unravel there, because still 60 percent of all households don’t have internet coverage.”

Last month, launching the International Cyber Engagement Strategy, Foreign Minister Julie Bishop said that for the purpose of national security, cyberspace cannot be an ungoverned space.

“Just as we have international rules that guide how states behave, and how states should behave towards each other, the international rules-based order that’s been in place for about 70 years, so too must states acknowledge that activities in cyberspace are governed by the same set of rules as military and security activities in traditional domains,” Bishop said in October.

“The 2016 US presidential election focused the world’s attention on the potential for cyber operations to interfere with democratic processes. This cannot be allowed to continue. It strikes at the very heart of the sovereignty of nations.”

According to the International Cyber Engagement Strategy, Australia will develop an international “architecture for cooperation” including mechanisms to respond to unacceptable behaviour in cyberspace in a timely manner.

“Australia’s responses to malicious cyber activity could comprise law enforcement or diplomatic, economic, or military measures as appropriate for the circumstances. This could include, but is not restricted to, offensive cyber capabilities that disrupt, deny, or degrade the computers or computer networks of adversaries,” the strategy states.

The strategy also implies that the nation has the capability to identify the source of cyber attacks.

“Depending on the seriousness and nature of an incident, Australia has the capability to attribute malicious cyber activity in a timely manner to several levels of granularity — ranging from the broad category of adversary through to specific states and individuals,” the strategy states.

In September, the federal government pledged AU$50 million over seven years for the cybersecurity cooperative research centre (CRC), with over AU$89 million in further funding to come from 25 industry, research, and government partners.

The cybersecurity CRC will deliver solutions to increase the security of critical infrastructure, the government said at the time, which includes “frameworks, products, and approaches that will service existing and future ICT enterprises across a broad range of platforms and operating systems”.

Assistant Minister for Industry, Innovation and Science Craig Laundy said the activities of the cybersecurity CRC will contribute to the objectives laid out in Australia’s AU$240 million Cyber Security Strategy, which is aimed at defending the nation’s cyber networks from organised criminals and state-sponsored attackers.

Related Coverage

Just one day after its release, iOS 11.1 hacked by security researchers

The bugs were found in Apple’s Safari web browser.

With a physical key, Google says it can protect you from nation-state hackers

When two-factor doesn’t cut it against the most sophisticated adversary, Google thinks it has an answer.

IoT security: Keeping users on their toes means staying on yours

IoT has introduced new vulnerabilities that can put your network at risk. Providing users with ongoing security training — and examples that relate to their work — will help keep your data safe.

Hacking group targets banks with stealthy trojan malware campaign

Stolen credentials are used to launch attacks which include the ability to stream live video of the screens of infected users.

This destructive wiper ransomware was used to hide a stealthy hacking campaign

“ONI” ransomware deployed on hundreds of machines in an effort by attackers to cover tracks of “Night of the Devil” campaign — which exploited leaked-NSA exploits.


Henry Sapiecha

New USA Federal Requirements On Cellphone Surveillance

WASHINGTON (AP) — Federal law enforcement officials will be routinely required to get a search warrant before using secretive and intrusive cellphone-tracking technology under a new Justice Department policy announced Thursday.

The policy represents the first effort to create a uniform legal standard for federal authorities using equipment known as cell-site simulators, which track cellphones used by suspects.

It comes amid concerns from privacy groups and lawmakers that the technology, which is now widely used by local police departments, is infringing on privacy rights and is being used without proper accountability.

“The policy is really designed to address our practices, and to really try to promote transparency and consistency and accountability — all while being mindful of the public’s privacy interest,” Deputy Attorney General Sally Yates told reporters in announcing the policy change.

The policy applies only to federal agencies within the Justice Department and not, as some privacy advocates had hoped, to state and local law enforcement whose use of the equipment has stirred particular concern and scrutiny from local judges.

The technology — also known as a Stingray, a suitcase-sized device — can sweep up basic cellphone data from a neighborhood by tricking phones in the area to believe that it’s a cell tower, allowing it to identify unique subscriber numbers. The data is then transmitted to the police, helping them determine the location of a phone without the user even making a call or sending a text message.

The equipment used by the Justice Department does not collect the content of communications.

Even as federal law enforcement officials tout the technology as a vital tool to catch fugitives and kidnapping suspects, privacy groups have raised alarms about the secrecy surrounding its use and the collection of cellphone information of innocent bystanders who happen to be in a particular neighborhood or location.

In creating the new policy the Justice Department was mindful of those concerns and also sought to address inconsistent practices among different federal agencies and offices, Yates said.

“We understand that people have a concern about their private information, and particularly folks who are not the subjects or targets of investigations,” Yates said.

The new policy requires a warrant in most cases, except for emergencies like an immediate national security threat, as well as unspecified “exceptional circumstances.” The warrant applications are to set out how the technology will be used.

In addition, authorities will be required to delete data that’s been collected once they have the information they need, and are expected to provide training to employees.

The policy could act as a blueprint for state and local law enforcement agencies in developing their own regulations. But it’s unclear how broad an impact Thursday’s announcement will have, since it does not directly affect local police agencies unless they’re working alongside federal authorities on a case or relying on their assistance.

Use of the technology has spread widely among local police departments, who have been largely mum about their use of the technology and hesitant to disclose details — often withholding materials or heavily censoring documents that they do provide.

Local departments have faced scrutiny from judges about how they deploy the equipment, though agencies have often insisted that non-disclosure agreements with the FBI limit what they can say.

The FBI has said that while specific capabilities of the equipment are considered sensitive, it did not intend for the agreements to prevent the police from disclosing to a court that the equipment was used in a particular case. Yates said she expected the FBI to revise any such agreements to be more transparent.

The American Civil Liberties Union called the policy a good first step, but expressed disappointment that it did not cover federal agencies outside the Justice Department or local police who use federal funds to purchase the surveillance equipment. It called on the Justice Department to close remaining loopholes, such as the one allowing for warrantless surveillance under undefined “exceptional circumstances.”

“After decades of secrecy in which the government hid this surveillance technology from courts, defense lawyers, and the American public, we are happy to see that the Justice Department is now willing to openly discuss its policies,” ACLU lawyer Nathan Freed Wessler said in a statement.

Nate Cardozo, a staff attorney with the Electronic Frontier Foundation, a privacy group, praised the policy as an important step, though he said he suspected Justice Department attorneys saw “the writing on the wall” and recognized that judges would increasingly begin requiring warrants.

Though the policy does not require local police to follow the lead of federal agencies, “this is going to let the air out of state law enforcement’s argument that a warrant shouldn’t be required.”

“We think that given the power of cell-site simulators and the sort of information that they can collect — not just from the target but from every innocent cellphone user in the area — a warrant based on probable cause is required by the Fourth Amendment,” Cardozo said.

Henry Sapiecha

Twitter abandons ‘Do Not Track’ privacy protection

Is this the end for ‘Do Not Track’, the web-tracking privacy service?

The most shocking internet privacy laws.

Twitter was one of the first companies to support Do Not Track (DNT), the website privacy policy. Now, Twitter is abandoning DNT and its mission to protect people from being tracked as they wander across the web.

DNT seemed like a good idea. With DNT switched on in your web browser, websites that supported it could neither place nor read advertising cookies on your device. Well, that was the idea, anyway.

Any web browser or application that supported DNT added a small snippet to its requests for web pages — the header DNT: 1. This meant websites and services that observed DNT shouldn’t track you on the internet.
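As a rough sketch of both sides of that exchange — a client sending the header and a server that chose to honour it — assuming Python with the requests and Flask libraries; the endpoint and cookie name are invented for illustration.

```python
# Client side: a browser or app that honours the setting simply adds the header.
import requests
requests.get("https://example.com", headers={"DNT": "1"})

# Server side: a site that chose to observe DNT could skip its tracking cookie.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    response = make_response("hello")
    if request.headers.get("DNT") != "1":
        # Only set the advertising/tracking cookie for visitors who haven't opted out.
        response.set_cookie("ad_tracking_id", "abc123")
    return response
```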

This would protect your online privacy. You might think that meant “Don’t collect and store any information about me without my explicit permission.”

Wrong.

From day one in 2012, that isn’t how it worked. According to Sarah Downey, an attorney and privacy advocate, the Interactive Advertising Bureau and the Digital Advertising Alliance (DAA), which represent most online advertisers, have their own interpretation of Do Not Track: “They have said they will stop serving targeted ads but will still collect and store and monetize data.”

However, Twitter played fair by the spirit of DNT rather than the law. Unfortunately, they were one of the few companies that did. DAA, for example, publicly abandoned DNT in 2013. With the advertisers and privacy advocates unable to agree on basic principles, DNT increasingly offered users no privacy protection worth the name.

Twitter finally had enough of fighting an already lost battle. In a note to its revised privacy policy, the company stated: “Twitter has discontinued support of the Do Not Track browser preference. While we had hoped that our support for Do Not Track would spur industry adoption, an industry-standard approach to Do Not Track did not materialize. We now offer more granular privacy controls.”

Under its new privacy rules, Twitter is extending how long its tracking cookies are active, from 10 days to 30 days as of June 18. You can also switch off Twitter ad personalization. From the same page, you can also disable geolocation and data sharing with third parties.

It’s a pity DNT has come to this. As Jason Kint, CEO of Digital Content Next, pointed out in an email interview: “Do Not Track still remains an elegant and simple consumer signal to not be tracked across the broader web.”

Kint remains hopeful about DNT: “Twitter dropping its support is disappointing as they were a leader here, but the standard is written regardless of what Twitter says and will continue to move forward. In the desire to regain consumer trust and reduce ad blocking, the ad tech world would be wise to embrace Do Not Track rather than ignoring it. Ultimately consumers win. No business has ever succeeded long-term without meeting consumer demands.”

I’m not at all optimistic. DNT has been spinning its wheels for years now with little progress. Online privacy remains an issue that upsets people, but at day’s end, neither companies nor the Trump administration have any real interest in protecting privacy.

Henry Sapiecha

Lawyers and insurers set for data breach payday

Soon, in Australia, Europe, and the UK, organisations that suffer a data breach must be able to show that they’d taken reasonable steps to prevent it. Time is running out.

Australia’s mandatory data breach notification laws come into force in February 2018. Europe’s General Data Protection Regulation (GDPR), which also requires breach notification, becomes law in May 2018. Brexit or not, the UK will also have to comply.

“[GDPR] will continue to apply to all businesses exporting goods or services into the European Single Market, regardless of any future legal and regulatory settlement reached by the UK with the EU,” wrote Peter Wright, managing director of DigitalLawUK, and chair of the UK Law Society’s Technology and Law Reference Group.

So where are we all up to here?

My reading is that we only have hints as to what’s required, and that we won’t really know until the lawyers get to work.

Wright has some security advice for UK law firms that really should be standard practice everywhere.

“Make sure that whatever medium you are using to either store or transmit personal data — in particular, data relating to your clients — is secure and encrypted,” he wrote.

Wright warns against “free cloud-based systems like Dropbox or Google Drive to communicate with clients or receive confidential data” because they’re not encrypted, but that’s no longer the case. Both Dropbox and Google Drive now encrypt customer data at rest, as does Apple’s iCloud.

But Wright’s general point about using unencrypted file sharing services stands. “You are effectively in legal and regulatory breach by using them for client-related activity as their servers are based in the cloud and most likely in the United States,” he wrote. And of course encrypted file storage is irrelevant if a user’s credentials are compromised through a phish.

Wright’s final observation is, to my mind, the most frightening.

“If firms have not already begun work on achieving compliance with the GDPR, they will find it impossible to achieve full compliance by May 2018. At this point, it’s a matter of working out how uncompliant you wish to be. You will have to cherry pick what you can and cannot afford to comply with, and put the rest in place as quickly as possible,” he wrote.

UK and European organisations still have an entire year to get compliant with their new laws. Australian organisations, somewhat less, although it could be argued that compliance with Australia’s laws would be easier. But is that really the case?

Australia’s Privacy Act says that the steps taken to protect personal information must be “reasonable in the circumstances”, but there haven’t been enough real-world cases to understand what that might mean.

Well how about the standard set by the Australian Signals Directorate (ASD) with its Essential Eight Strategies to Mitigate Cyber Security Incidents, released in February?

“The eight mitigation strategies with an ‘essential’ effectiveness rating are so effective at mitigating targeted cyber intrusions and ransomware, that ASD considers them to be the cyber security baseline for all organisations,” the ASD wrote.

The Essential Eight includes measures that we know many organisations don’t implement: application whitelisting; getting rid of Adobe Flash; installing ad blockers; disabling untrusted Microsoft Office macros; multi-factor authentication; or even securely-stored daily backups.

If experts like the ASD consider all these to be “baseline”, wouldn’t a lawyer argue that failing to implement the Essential Eight is failing to take “reasonable steps”? I guess it depends on “the circumstances”, right?
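As a loose illustration of the kind of self-audit that question implies, the sketch below scores an organisation against the controls named above; the control list is drawn from this article’s examples rather than the full ASD document, and the status values are hypothetical.

```python
# Hypothetical audit of the baseline measures mentioned above.
controls = {
    "application whitelisting": True,
    "Adobe Flash removed": False,
    "ad blocking in place": True,
    "untrusted Office macros disabled": True,
    "multi-factor authentication": False,
    "securely stored daily backups": True,
}

gaps = [name for name, implemented in controls.items() if not implemented]
coverage = 1 - len(gaps) / len(controls)

print(f"Baseline coverage: {coverage:.0%}")
if gaps:
    print("Unimplemented controls (potential 'reasonable steps' gaps):", gaps)
```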

I’ve previously written that once we see the first data breaches being disclosed, the lawyers will follow. What I didn’t consider was the insurance industry.

Cyber insurance is already the fastest-growing sector of the insurance market, according to Nick Abrahams, a partner with law firm Norton Rose Fulbright, and their APAC technology practice leader. Counter-intuitively, better insurance cover means that the lawyers are far more likely to swoop in for the kill.

“We know that the class-action law firms are looking at cyber as their next big opportunity,” Abrahams told the InnovationAus.com conference Cyber Security — the Leadership Imperative 2017 in Sydney last week.

“If there’s 100,000 people impacted [by a data breach], or a million people, and they can all be awarded $1000 or $2000, that’s a class action,” he said.

“The US has a massive amount of class actions in relation to privacy breaches, and the reason those class actions occur is because people know that there is insurance there to back it up,” Abrahams said. He expects a “steep rise” in litigation.

While it’s fast-growing, the cyber insurance industry is “quite immature”, especially in Australia, and “all the policies are completely different”, according to Andrew Bycroft, chief executive officer of The Security Artist.

“It’s not even like comparing apples and oranges, it’s like comparing apples and dogs,” Bycroft told the same conference.

“A lot of the insurers are actually taking on a lot of unnecessary risk. For example, they wouldn’t provide home and contents insurance for people who have houses with no doors, but what I’ve seen them doing is actually offering policies to organisations which are pretty poor in terms of their resilience capabilities.”

Bycroft says that insurers might want to work with potential customers to improve their security posture before selling them insurance.

Craig Davies, chief executive officer of the new Australian Cyber Security Growth Network (ACSGN), wasn’t exactly thrilled with that suggestion.

“A marketplace driven by insurers can only be fantastic,” Davies told the conference, to nervous audience laughter. “If I had no ethics I’d certainly invest in buying insurance and selling insurance for cyber right now. You could make a fortune.”

Yes, there’s plenty of money to be made, by insurers, by lawyers, and by the cybersecurity industry that cleans up the mess. Or, ideally, fixes things before there’s a mess to clean up.

Somewhere in there, we might even manage to better protect people’s personal information.

Henry Sapiecha

Police illegally accessed journalist’s phone files under new metadata retention regime

The Australian Federal Police illegally obtained a journalist’s phone records under the Turnbull government’s new metadata retention regime, the agency announced on Friday.

The breach took place as part of an investigation into a leak of confidential police material – and the incident will now be investigated by the Commonwealth Ombudsman.

AFP commissioner Andrew Colvin said the police officers investigating the leak did not realise they were required to obtain a warrant to access the journalist’s metadata.

“This was human error. It should not have occurred. The AFP takes it very seriously and we take full responsibility for breaching the Act,” Mr Colvin said.

“There was no ill will or malice or bad intent by the officers involved who breached the Act. But simply it was a mistake.”

The journalist in question had not been informed their data had been accessed, Mr Colvin said, due to sensitivities around the ongoing investigation into the leak.

The breach occurred “earlier this year” and was reported to the Ombudsman on Wednesday.

Under the revised data retention regime, police are required to obtain a warrant from a judge to seek metadata from a journalist.

“The vulnerability is the investigator needs to understand that that’s their requirement,” Mr Colvin said on Friday. “On this occasion, the investigator didn’t.”

The phone records in question were relevant to the investigation, Mr Colvin said, but “what was improper was that the right steps weren’t taken to gain access to it”.

The breach is the first such incident that has come to light under the government’s new metadata retention regime, which requires service providers to store their customers’ data for two years.

Acknowledging the policy was “controversial”, Mr Colvin said Australians should nonetheless have “full confidence” in both the police and the policy.

He conceded the AFP’s internal procedures had not anticipated and prevented the error and therefore those practices would be subject to “significant changes”.

Access to metadata would now be restricted to more senior officers, he said, and the number of officers who can approve access to metadata will be reduced. Training will also be bolstered.

Asked if the unlawfully-obtained phone records would still be relied on to inform the actions of investigators, he acknowledged that once seen it could not be unseen.

“Clearly they can’t unsee it. They’ll need to consider … what weight they put on what they saw,” Mr Colvin said. “But that material was accessed illegally, so it can have no bearing on the conduct of the investigation.”

He stressed the content of the journalist’s phone calls was not accessed, just the call records. But Paul Murphy, chief executive of the Media, Entertainment and Arts Alliance, said that was not a mitigating factor.

“It’s another demonstration that the AFP do not understand the sensitivities here, the vital importance of protecting journalists’ confidential sources,” he said. “It’s an absolute disgrace.”

South Australian senator Nick Xenophon, who lobbied for extra safeguards for journalists when the laws were formulated, said he was “furious” about the revelation and would seek further amendments to the law.

“This is outrageous. There’s been a flagrant breach of the law here,” he said. “The safeguards have been completely trashed. This should chill the spine of every journalist in this country.”


Henry Sapiecha

This Algorithm Decides Crime Cases Almost As Well As A Judge

A computer program could help relieve the massive backlogs facing the world’s highest courts


A computer algorithm took on the work of real human judges and did a pretty good job, predicting the decisions of one of Europe’s highest courts with 79 percent accuracy. The finding suggests artificial intelligence could help the world’s busiest courts work through their massive backlog of cases, even if an algorithm isn’t about to take up a digital gown and gavel and start actually deciding cases.

The AI analyzed cases tried before the European Court of Human Rights, which hears cases from people and groups who claim their civil or political rights have been violated in their home countries. An international team of computer scientists worked with a legal scholar to determine just how well AI could predict the court’s ultimate judgement based on how the written decision described the factual background of the case and the arguments of the parties involved. They found it agreed with the judges’ decision four of five times — and that the underlying facts of the case were by far the best predictor of the outcome of a case, rather than any of the more abstract legal arguments.

“The fact that we can get this accuracy, it means that there are some consistent patterns of violations that lead to overturning the [previous court’s] decision,” University of Pennsylvania computer scientist Daniel Preoţiuc-Pietro told Vocativ.

That suggests the court is typically less concerned with parsing philosophical questions of whether a specific instance is a human rights violation than it is determining how that situation fits into their already defined categories of violations. Preoţiuc-Pietro pointed to the example of people who allege mistreatment in prison as a situation that typically led to decisions in those people’s favor. “That’s definitely more likely for the court to actually accept that the state made a mistake and the people involved were actually justified,” he said.

More: U.S. Military Wants Robots That Can Explain Themselves

The AI used what’s known as natural language processing to analyze the cases. This particular method involved looking at the text of a decision as a big bag of words, not worrying about any particular word order or grammar. Instead, the AI looked at what individual words and combinations of two, three, or four words appeared most frequently in the text, regardless of order. The AI then looked at all these combinations, known as N-grams, and clustered them into different overall topics.

The court’s decisions include lengthy sections recapping not only the factual background of the cases but also the original arguments made by the parties in the case. This gave the AI a broad sense of what each text was talking about and gave it the context necessary to predict the outcome of the case, which it did correctly in nearly four out of every five cases.
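A minimal sketch of that bag-of-N-grams approach, assuming Python with scikit-learn; the training texts, labels, and model choice are placeholders rather than the researchers’ actual ECtHR corpus or exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder corpus: text summarising each case and its outcome
# (1 = violation found, 0 = no violation). Real inputs would be ECtHR decisions.
texts = [
    "applicant detained without judicial review for months",
    "complaint about length of civil proceedings dismissed",
    "alleged mistreatment in prison and lack of investigation",
    "property dispute resolved fairly by domestic courts",
]
outcomes = [1, 0, 1, 0]

# Bag of N-grams: count 1- to 4-word sequences, ignoring order and grammar.
model = make_pipeline(CountVectorizer(ngram_range=(1, 4)), LinearSVC())
model.fit(texts, outcomes)

print(model.predict(["applicant held in prison without medical care"]))
```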

But that doesn’t mean the researchers are hoping to see AI judges anytime soon.

“We’re not advocating for automating any decisions,” said Preoţiuc-Pietro. “Decisions should still be made by the judges.” Where the AI can make a difference is in helping to determine which cases make it to the judges in the first place.

More: Artificial Intelligence Writes Extremely Bad Harry Potter Fan Fic

In 2015, the researchers found that nearly 85,000 petitions were submitted to the court, of which just 891 were actually decided upon. All the rest were thrown out as inadmissible, meaning the court couldn’t take them on and the previous decision by a lower court would have to stand. The European Court of Human Rights relies both on individual judges and committees to work through all these cases and figure out which are worth bringing to the actual court’s attention. Last year, that meant the entire court apparatus had to process more than 230 cases every single day, making it a huge challenge just to give each petition the human attention it deserves.

Artificial intelligence, by contrast, could zip through 85,000 petitions and decide which were most likely to be worth the court’s time, based on how similar each petition is to the court’s previous cases. Preoţiuc-Pietro suggested the algorithm could separate the cases into three groups based on the court’s prior history: those the court would likely rule on, those it likely would rule inadmissible, and those in a gray area. Committees could then devote more time to examining the cases already identified as being of uncertain status, rather than having them take valuable time doing all their own categorization.
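A minimal sketch of that three-way triage, again assuming scikit-learn; the training examples and probability thresholds are invented for illustration, and a simple probabilistic classifier stands in for whatever model the court might actually adopt.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder history standing in for previously decided petitions
# (1 = taken up by the court, 0 = ruled inadmissible).
history = [
    ("alleged mistreatment in prison and no effective investigation", 1),
    ("complaint about a parking fine with no rights issue raised", 0),
    ("detention extended repeatedly without judicial review", 1),
    ("dispute over a neighbour's fence, domestic remedies not exhausted", 0),
]
texts, labels = zip(*history)

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def triage(petition: str, lower: float = 0.25, upper: float = 0.75) -> str:
    """Bucket a new petition by the estimated probability it would be taken up.
    The thresholds are arbitrary illustrations, not the court's criteria."""
    p = model.predict_proba([petition])[0][1]
    if p >= upper:
        return "likely worth a judge's attention"
    if p <= lower:
        return "likely inadmissible"
    return "grey area - flag for committee review"

print(triage("prisoner alleges beatings and no investigation by authorities"))
```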

“These committees are time-limited and beyond that very costly, so they can actually look at just the flagged cases which are more likely to be disputed and analyze them more thoroughly,” said Preoţiuc-Pietro, “while the others they can be sent for just individuals and they don’t need to be scrutinized by more people.”

The goal then wouldn’t be to take the human element out of the law, but instead the complete opposite: the European Court of Human Rights and other bodies like it would have more time to focus on their most difficult cases, while the AI would separate out the cases that would likely just get thrown out anyway.


Henry Sapiecha

 

Fault Lines – Cyber-war video report

Cyberwar. A conflict without footsoldiers, guns, or missiles.

Instead the attacks are launched by computer hackers. Digital spy rings. Information thieves. Cyberarmies of kids, criminals, terrorists – some backed by nation states.

In the US there is a growing fear that they pose a massive threat to national security, and a conviction that the world’s military superpower must prepare for the fight ahead.

At stake: Crucial national infrastructure, high value commercial secrets, tens of billions of dollars in defence contracts, as well as values like privacy and freedom of expression.

In this episode of Fault Lines, Josh Rushing enters the domain of “cyber” and speaks to a former US national security official turned cybersecurity consultant, a Silicon Valley CEO, a hacker, and those who warn of a growing arms race in cyberspace.

He asks: Is the US contributing to the militarisation of cyberspace? Are the reports of cyber threats being distorted by a burgeoning security industry? And are the battles being waged in cyberspace interfering with the Internet as we know it?

People featured in this film include: Josh Rushing, John Fraize, Darrel Covell, Rsignia, Keith Alexander, Redbeard, John Verdi, Jay Rockefeller, Olympia Snowe, Jim Lewis, Enrique Salem, Michael Chertoff.


Henry Sapiecha