When Microsoft bought Twitch competitor Beam in 2016, the acquisition brought with it the promise of more social streaming features for its biggest game franchises. Arguably its biggest franchise, Minecraft, has just received those capabilities on the rebranded service, Microsoft Mixer. While the alliteration might be enough for some, the new features are even more exciting.
As with other games that integrate Mixer directly, players can stream from within the game without any need for additional software. This allows the game to implement the feature that sets Mixer apart from its competitors: social control. Viewers have the ability to control the player's environment, spawning obstacles and even enemies.
In Minecraft, viewers could potentially change night to day, add elements to the player's environment, or even spawn monsters to challenge the player. This makes the process far more automated than on Twitch or YouTube Gaming, where viewers can only flood the chatroom with comments, hoping that enough of them will convince the player to trigger the event themselves.
For the player, however, there are ways to put limitations on what the viewers can do. If you don't want to allow viewers to add items to your landscape, you can turn it off. If you want to limit the ability to add enemies, you can. This gives control to both the streamer and viewer in tandem.
These new features are available now for Windows 10, Xbox One and Android version 1.2.5.
From time to time, a company has a really bad week. Not just one, but multiple incidents happen in succession and cause a lot of trouble. Sometimes companies weather these weeks just fine, and sometimes they take a major toll on the company's image for a long time. This week might have been the latter for Uber, with two major legal blows landing in very short order.
Uber revealed this week that it has been on the receiving end of a massive data breach, and not a recent one. With 57 million accounts compromised, covering both drivers and riders, it's the kind of breach that requires immediate attention and immediate notification of those affected, so they can make sure their passwords are safe, their credit cards are not being used, and so on.
Unfortunately, Uber decided to handle the breach in a very different way. As disclosed this week, the breach occurred in 2016, but the information was never made public. Instead, since-ousted CEO Travis Kalanick decided to pay the hackers $100,000 for a promise that they would delete the data. That isn't exactly how hackers work, though, so you're still going to want to verify that your information is safe.
Current CEO Dara Khosrowshahi was surprised to discover that a breach had occurred and never been disclosed. He immediately set about finding out how the company had handled it, and was not happy. In his public statement, he said,
You may be asking why we are just talking about this now, a year later. I had the same question, so I immediately asked for a thorough investigation of what happened and how we handled it.
In response to his findings, he said that two security employees were no longer with the company, including Chief Security Officer Joe Sullivan. He continued, saying,
None of this should have happened, and I will not make excuses for it. While I can't erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes. We are changing the way we do business, putting integrity at the core of every decision we make and working hard to earn the trust of our customers.
Bad Background Checks
This particular one is not new for the company. Uber has had several run-ins with riders and attorneys claiming that background checks have been incomplete, inaccurate or, in one case, never run. There was even a period when Uber neglected to make any decisions based on those background checks, allowing drivers with violent pasts, DWI arrests and even no driver's license to drive under the company's brand.
This week, another batch of bad drivers was revealed by the Colorado Public Utilities Commission. After a driver assaulted a rider in Vail, the commission opened an investigation into the company's business practices and announced that 57 drivers who should not have been allowed to drive for the company had been. According to the report,
PUC staff found that Uber allowed individuals to drive with previous felony convictions, major moving violations (DUI, DWI, reckless driving, driving under restraint), and numerous instances of individuals driving with suspended, revoked or cancelled driver's licenses.
One of the drivers in question was even an escaped convict. All of these issues would obviously come out in even the least detailed of background checks. Half of them can be discovered simply by reading a local newspaper. Because of the obvious oversight, or possibly purposeful ignorance, the state has fined Uber $8.9 million.
According to Stephanie Sedlak, a spokesperson for Uber,
We recently discovered a process error that was inconsistent with Colorado's ridesharing regulations and proactively notified the Colorado Public Utilities Commission (CPUC).
This error affected a small number of drivers and we immediately took corrective action. Per Uber safety policies and Colorado state regulations, drivers with access to the Uber app must undergo a nationally accredited third-party background screening. We will continue to work closely with the CPUC to enable access to safe, reliable transportation options for all Coloradans.
The strangest part of this statement is the suggestion that, if it weren't for Colorado's regulations, Uber would have had no issue with letting these drivers continue. That does not instill a lot of confidence in the company's morals or safety processes. It would seem that there are certain universal disqualifiers that should remove a potential driver from contention, and those would include escaped convicts, violent felons and people who are legally not permitted to drive any vehicle.
Over the last 15 years, one of the most successful movie franchises has been The Lord of the Rings. In that time, Peter Jackson has created 6 films - 3 from the original trilogy and 3 from The Hobbit. Between these film series, however, lies a veritable treasure trove of additional content, in particular The Silmarillion.
Enter Amazon Studios, which has paid $250 million for the rights to produce a television series set in the Lord of the Rings universe. It is not certain that the series will be drawn from the overwhelmingly large collection of content about Middle Earth contained in The Silmarillion, but it is a good guess. It is the largest repository of canon content set between the two major stories, and Amazon has confirmed that the series will take place in the same gap.
Like other Amazon Originals, it will be available to Amazon Prime Members, though an arrival date is not yet available. Based on the massive cost of the project, both monetarily and in time, as well as the early stage of production, it would not be out of the question for the first season to premiere in late 2019.
Even before production starts, however, Amazon has already made a multi-season commitment to the project. That means that, no matter how well the first season does, we are guaranteed at least a second season. If the show does as well as Amazon is obviously expecting, there is also talk of a spin-off series, which could bring the focus of the story down a new path, but there are a lot of variables before that could be a possibility.
As a long-time fan of Middle Earth, I am both excited for and worried about this project. There is a lot of potential for success, but it would not be the first time a Middle Earth adaptation was a disaster. The estate was so embarrassed by the last project that we almost didn't get the Peter Jackson films at all. I will be watching this production with anticipation and trepidation.
Over the past few years, the idea of a "bug bounty program" has grown quickly. Microsoft, Apple and Google all offer money for finding issues in their software, and smaller companies have taken to introducing similar programs. Unfortunately, most of them have not managed these programs in a detailed or responsible manner. Case in point: DJI, manufacturer of the Phantom quadcopter drone line. The company launched its program in August but never really explained what it covered. Some companies look only for firmware issues, while others encourage server research.
Kevin Finisterre decided he would reach out to the company, looking for details on the program. After some back-and-forth, it was made clear that server issues were included in the program. So, Finisterre set out to find issues in what is becoming an increasingly dangerous place for security breach data: GitHub. As expected, Finisterre was able to find SSL certificate information, as well as public and private keys for Amazon Web Services.
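The kind of search that surfaces leaked keys on GitHub can be approximated with a few regular expressions. This is a minimal sketch, not Finisterre's actual tooling, and the sample string is hypothetical; the AWS access key ID format (the `AKIA` prefix) and the PEM private-key header are well-documented patterns:

```python
import re

# Well-known fingerprints of accidentally committed credentials.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of the secret patterns found in a blob of text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

# Hypothetical file contents mimicking an accidentally committed config.
# (AKIAIOSFODNN7EXAMPLE is AWS's own documented example key.)
leaked = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan_text(leaked))  # ['aws_access_key_id']
```

Real secret scanners add entropy checks and many more patterns, but even this much is enough to show why committing configuration files to a public repository is dangerous.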
After communicating his detailed and extensive findings to the company, he was offered a job consulting on security. That was, until the legal department got involved and the entire tone of the conversation changed. Instead of a job, the company threatened legal action against him for hacking. It even sent over a contract that was insulting at best: it required him to stay silent on the topic, and promised no protection from legal action for finding the data in his report. He said of the interaction,
In the days following no less than 4 lawyers told me in various ways that the agreement was not only extremely risky, but was likely crafted in bad faith to silence anyone that signed it. I went through various iterations to get the letter corrected. It was ultimately going to cost me several thousand dollars for a lawyer that I was confident could cover all angles to put my concerns to bed and make the agreement sign-able.
After refusing to sign the contract and turning down a $30k bounty, Finisterre instead published his findings and his interactions with the company. The company, on the other hand, began a smear campaign against Finisterre, publishing a statement calling him a "hacker" and diminishing his findings.
DJI is investigating the reported unauthorized access of one of DJI's servers containing personal information submitted by our users. As part of its commitment to customers' data security, DJI engaged an independent cyber security firm to investigate this report and the impact of any unauthorized access to that data. Today, a hacker who obtained some of this data posted online his confidential communications with DJI employees about his attempts to claim a "bug bounty" from the DJI Security Response Center.
DJI implemented its Security Response Center to encourage independent security researchers to responsibly report potential vulnerabilities. DJI asks researchers to follow standard terms for bug bounty programs, which are designed to protect confidential data and allow time for analysis and resolution of a vulnerability before it is publicly disclosed. The hacker in question refused to agree to these terms, despite DJI's continued attempts to negotiate with him, and threatened DJI if his terms were not met.
This interaction underscores several issues plaguing the software industry. The first is the open sourcing of software by developers who don't know the proper process for making code public. When that happens, things go wrong, such as database connection strings and cloud keys being published. This can make very private information, such as the driver's licenses and passports in this case, available to the public.
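The standard defense against this class of leak is to keep credentials out of the source entirely and supply them through the environment at runtime. A minimal sketch, assuming a hypothetical `DATABASE_URL` variable name:

```python
import os

def get_database_url():
    """Read the connection string from the environment at runtime,
    so it never appears in the committed source."""
    url = os.environ.get("DATABASE_URL")
    if url is None:
        # Fail loudly rather than falling back to a hardcoded credential.
        raise RuntimeError("DATABASE_URL is not set")
    return url
```

Paired with an ignore rule for any local file that holds the real value, this makes it much harder to publish a working credential by accident.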
The second issue is poorly implemented bounty programs. If a company does not have a detailed user guide for their program, it is easy for it to turn sour, especially when a bug or security issue embarrasses the company. A reward can turn into a lawsuit or, worse yet, criminal charges. This can ruin a developer or security expert's career in perpetuity.
Based on the popularity of Star Wars: Battlefront, there was a lot of excitement from the gaming community for the follow-up. That is, right up until details about how the mechanics of the game would work became public. Following the latest trend in gaming, EA introduced micro-transactions into their flagship AAA title, something that is usually reserved for casual mobile games.
This decision sparked an immediate backlash from the gaming community, even before the extent of the decision had been revealed. As the release date approached and players got a look at the game, it was revealed that large portions of it, including prime characters, were locked behind an in-game currency system. For example, bringing Vader into the game would cost you about 40 hours of play time. Alternatively, rather than using playtime to unlock characters, you could spend real-world money to purchase currency. Unlocking all aspects of the game would cost $2,100, making the $80 price of the game itself relatively insignificant, or 4,528 hours of play, which works out to 189 days.
In response, the internet, and particularly Reddit, took EA to task, accusing them of theft and refusing to purchase the game. Initially, EA didn't seem to take notice, but, in the end, EA did seem to realize the problem they had created. The company released a statement, in which they vaguely apologized for misunderstanding how players wanted to be charged for additional content.
We hear you loud and clear, so we're turning off all in-game purchases. We will now spend more time listening, adjusting, balancing and tuning. This means that the option to purchase crystals in the game is now offline, and all progression will be earned through gameplay. The ability to purchase crystals in-game will become available at a later date, only after we've made changes to the game. We'll share more details as we work through this.
In addition to removing crystals from the game, for now, they have also changed the rate at which in-game currency is rewarded. In fact, they have cut the requirements by 75%, meaning that the 189 days of continuous playtime is now only 47. If someone plays the game 3 hours per day, every day, it would now take just over a year to unlock everything, as opposed to more than four years previously. For now, this is a win for gamers, but it will not last forever.
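The arithmetic behind those figures is easy to verify. The 4,528-hour total and the 75% cut come from the reporting above; the 3-hours-per-day pace is just an illustrative grind rate:

```python
# Playtime figures reported for Star Wars: Battlefront II.
original_hours = 4528                  # hours to unlock everything at launch
reduced_hours = original_hours * 0.25  # after the 75% cut in unlock costs

original_days = original_hours / 24    # continuous play: ~189 days
reduced_days = reduced_hours / 24      # continuous play: ~47 days

# A player grinding 3 hours per day, every day:
casual_days_before = original_hours / 3  # ~1509 days, over four years
casual_days_after = reduced_hours / 3    # ~377 days, just over a year

print(round(original_days), round(reduced_days))  # 189 47
```

Even after the cut, a full unlock at a casual pace remains a year-long commitment, which is why the community saw this as a concession rather than a fix.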
The market for kids' wearables is large and growing. Every year at CES we encounter at least one company showing off a wearable device designed to make kids safer and parents more at ease. Usually they are shaped like a watch, though not all of them have screens. Most allow a parent to track the child via GPS, some allow parents to communicate with their kids, and some allow parents to listen in on their kids.
This last feature, while uncommon here in the US, has raised concern in Germany. While some helicopter parents have begun using it to listen in on their kids' teachers, the government worries that it could be taken one step further: espionage. Yes, that's right - the German government is worried about people listening in on all of the top-secret meetings that 8-year-olds are taking.
In reality, the concerns over privacy with these types of devices are legitimate. Several models of these watches, and other IoT transmitters, have been found to transmit and store data unencrypted. This means, especially in the case of children, that it could become very easy to track a child's location or listen to a child's environment without anyone being the wiser. If the child in question is the kid of a public figure, tracking the child could be like tracking the parent, creating a double security threat.
In addition to banning the sale of these products, the government has encouraged parents to take them from their kids and destroy them. This might be an extreme reaction to a hypothetical problem. In fact, it seems that, rather than banning the devices entirely, perhaps regulations to ensure the safety and security of the data, and the wearer, might make for a better plan. The privacy issues are not limited to kids' smartwatches - in fact they are potentially ever-present in all IoT devices, including adult watches.
Data security is the real topic here, not kids' smartwatches in particular. As a whole, we need to encourage manufacturers of IoT devices to pay more attention to what they are doing, and to respect the privacy of the people who buy their products, not to demonize a single aspect of the industry.