Monthly Archives: May 2018

Mobile phone tracking firm exposed millions of Americans’ real-time locations. Is Australia in the loop?

The bug allowed one Carnegie Mellon researcher to track anybody’s mobile phone in real time

A bug allowed anyone to skip a consent requirement in a cell phone location tracking site. (Image: ZDNet)

A company that collects real-time location data on millions of cell phone customers across North America had a bug in its website that allowed anyone to see where a person was located, without obtaining their consent.

Earlier this week, we reported that four of the largest cell giants in the US are selling your real-time location data to a company that you’ve more than likely never heard of before.

Read also: Evidence of stingrays found in DC, Homeland Security says

The company, LocationSmart, is a data aggregator that claims to have “direct connections” to cell carriers to obtain locations from nearby cell towers. The site had its own “try-before-you-buy” page that let users test the accuracy of its data. The page required explicit consent from the user, obtained via a one-time text message, before their location data could be used. When we tried it with a colleague, we tracked his phone to within a city block of his actual location.

But that website had a bug that allowed anyone to track someone’s location covertly without their permission.

“Due to a very elementary bug in the website, you can just skip that consent part and go straight to the location,” said Robert Xiao, a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, in a phone conversation.

“The implication of this is that LocationSmart never needed consent in the first place,” he said. “There seems to be no security oversight here.”

The “try” website was pulled offline after Xiao discreetly disclosed the bug to the company, with help from the CERT Coordination Center, a vulnerability coordination body also at Carnegie Mellon.

US cell carriers are selling access to your real-time phone location data

The company embroiled in a privacy row has “direct connections” to all major US wireless carriers, including AT&T, Verizon, T-Mobile, and Sprint — and Canadian cell networks, too.


Xiao said the bug could have exposed nearly every cell phone customer in the US and Canada, some 200 million customers.

The researcher said he started looking at LocationSmart’s website following ZDNet’s report this week, which followed a story from The New York Times revealing how a former police sheriff snooped on phone location data from Securus, a customer of LocationSmart, without a warrant.

The sheriff has pleaded not guilty to charges of unlawful surveillance.

Xiao said one of the APIs behind the “try” page was not validating the consent response properly. Xiao said it was “trivially easy” to skip the step where the API sends the text message to the user to obtain their consent.

“It’s a surprisingly simple bug,” he said.
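To make the class of bug concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, parameters, and coordinates are invented for this example and are not LocationSmart’s real API. The flawed version trusts a consent flag the caller supplies; the fixed version only accepts a server-issued token created after the user actually replies to the one-time text.

```python
# Hypothetical sketch of the bug class described above. All names and
# values are illustrative, not LocationSmart's actual API.

def lookup_location(phone_number, consent_granted):
    """Flawed endpoint: trusts the caller's own claim that consent was given,
    so anyone can pass consent_granted=True and skip the text-message step."""
    if not consent_granted:
        return {"error": "consent required"}
    # Location lookup is simulated; a real service would query carrier data.
    return {"phone": phone_number, "lat": 40.44, "lon": -79.99}

def lookup_location_fixed(phone_number, consent_token, valid_tokens):
    """Fixed endpoint: consent must be proven with a token the server issued
    only after the user replied to the one-time text message."""
    if valid_tokens.get(phone_number) != consent_token:
        return {"error": "consent required"}
    return {"phone": phone_number, "lat": 40.44, "lon": -79.99}
```

The essential fix is that consent state lives on the server, not in a parameter the client controls.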

Xiao showed ZDNet a video of a script he built exploiting the bug in the company’s API.

LocationSmart did not immediately respond to a request for comment.

Xiao verified the bug with a few people he knew. Brian Krebs, who first reported the story earlier today, also verified it with a number of people who gave him permission to test.

“None of them got any notification that their location was being tracked,” he said.

“I had a friend who was driving around Hawaii and [with permission] pinged the location and I could watch the marker move around the island,” he said. “It’s the kind of thing that sends eerie chills down your spine.”

Read also: Stingray spying: 5G will protect you against surveillance

Sen. Ron Wyden (D-OR), who last week called on the cell carriers to stop exchanging data with third parties, offered a statement.

“This leak, coming only days after the lax security at Securus was exposed, demonstrates how little companies throughout the wireless ecosystem value Americans’ security,” said Wyden.

“It represents a clear and current danger, not just to privacy but to the financial and personal security of every American individual. Because they value profits above the privacy and safety of the Americans whose locations they traffic in, the wireless carriers and LocationSmart appear to have allowed nearly any hacker with a basic knowledge of websites to track the location of any American with a cell phone,” he said.

Wyden said the dangers from LocationSmart and other companies “are boundless.”

“If the FCC refuses to act after this revelation then future crimes against Americans will be on the commissioners’ heads,” he said.

We reached out to the cell providers — AT&T, Verizon, and Sprint — which all said they were investigating. T-Mobile did not respond to a request for comment.

But this newly disclosed bug shows the carriers have yet to cut off that access, if they ever will.

Henry Sapiecha

YouTube & Facebook are struggling to keep billions under control

YouTube, Facebook and many other media platforms are facing the same issue: a lot of undesirable material is distributed through their channels. That has always been the case but, recently, it has become a threat as extremists of all sorts have begun to use these channels to spread propagandist and violence-glorifying content. As new privacy laws are passed and advertising sponsors apply pressure, the media giants have to either discover better ways to handle the deluge of user posts or risk hefty fines. Artificial intelligence (AI) has been touted as the silver bullet, but are algorithms really the solution?

To give you an idea of the scale I’m referring to: 500 hours of video content are uploaded to YouTube every single minute – and counting. It would require hundreds of thousands of workers to review the videos and, if necessary, delete them. It’s a golden opportunity to become a big employer – Google has the funds, after all! Instead of just a measly 80,000 employees worldwide, 2,500,000 additional jobs could be created to give a few of those billions back to society. Naturally, that’s out of the question: profits would decline and shareholders would surely threaten self-immolation. That’s why Google is leaving this issue to technology.

Here’s the plan: human workers have flagged 2 million videos for deletion, adding markers to specify the reason. Self-learning machines analyze the data and scan both audio and video tracks to learn about humans and objects in context. Even text overlays and political or religious symbols are recognized. The objective: to find and remove violence-glorifying content, terrorist propaganda, hate speech, spam and, naturally, nudity.
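The train-on-human-labels idea above can be sketched with a deliberately tiny toy model. Real systems analyze audio and video with deep networks; this word-counting classifier is only an illustration (all example phrases and names are invented), but it shows both the mechanism and its blind spot: a documentary about an attack uses the same words as propaganda about one.

```python
# Toy illustration of supervised flagging: learn from human-labelled
# examples, then score new text. Purely a sketch, not YouTube's system.
from collections import Counter

def train(labelled_examples):
    """Count how often each word appears in flagged vs. allowed examples."""
    flagged, allowed = Counter(), Counter()
    for text, is_flagged in labelled_examples:
        (flagged if is_flagged else allowed).update(text.lower().split())
    return flagged, allowed

def predict(model, text):
    """Flag text whose words were seen more often in flagged examples."""
    flagged, allowed = model
    score = sum(flagged[w] - allowed[w] for w in text.lower().split())
    return score > 0
```

Note that such a model judges surface features only: it has no way to tell that a war-crimes documentary and a propaganda clip differ in intent, which is exactly the failure mode discussed in this article.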

Today, AI has already replaced much of the human workforce.

The algorithms are continuously refined with each iteration. Which videos show a bombing, a swastika or an uncovered female breast? Censors were already quite swift when it came to pornography, but other illegal content is now slowly coming into focus as well. Affected videos are marked and later wiped from the portal. Of over 8 million recently deleted videos, a whopping 6.6 million were identified through AI, while human workers and user feedback did the rest. Many videos hadn’t even become publicly viewable yet. But while the video portal celebrates, the devil is in the detail.

Lately, problem cases have been piling up since the technology doesn’t always act as intended. War crime documentaries that serve to foster education were erroneously deleted and so were historical movies. The algorithms detected the depiction of inhuman practices but failed to grasp the intention behind the movies. Such are the limits of AI to this day: it can spot questionable content but it can’t decipher the rationale behind it (yet). The same applies to nudity: nude paintings, as common in the fine arts, also met with disapproval from the virtual jury and were likewise deleted. After all, how can algorithms tell the difference between artful nudity and obscene home videos? It seems the system can’t do without common (human) sense just yet.

Which of the countless online videos contain illegal content?

Satire is also beyond a machine’s comprehension. While many of us can laugh at Monty Python’s Nazi jokes, computers are totally devoid of any sense of humour. The closer the jokes stick to the “original”, the quicker they face auto-deletion. That’s why many users see signs of a digital inquisition on the horizon. Though they welcome YouTube’s struggle to no longer be a cesspool for extremist, hateful or confused minds, they criticize the shotgun approach exhibited by the AI. Today, investigative journalists, researchers and organizations that document war and other crimes are facing permanent suspension of their channels. Even G-rated garden party videos are deleted because the AI misinterprets bare skin. By contrast, videos uploaded by pedophiles stay up because these people know how to exploit the AI’s weaknesses through subtlety. No algorithm can decipher the many possible shades of a topic (yet). Google has penalised my site by withholding AdSense adverts because I document crimes here.

It seems human workers will remain indispensable for some time to come to evaluate said shades, and YouTube will have to comply with some form of binding standard to stay relevant. It will also have to be more open and transparent: presently, users receive no explanation as to why their videos were blocked. YouTube has vowed to respond faster to questions and to provide insights into the implementation of its guidelines. That should be a given but, in the case of YouTube, it actually means progress. It has also recruited additional staff, if only in dribs and drabs. Apparently, YouTube itself doesn’t trust its AI too much, and that’s at least comforting.

What we’d like to know is: do you believe AI should be adopted here, or is common (human) sense still necessary?

Henry Sapiecha