Yesterday, we reported on Windows Cloud — a new version of Microsoft’s Windows 10 that’s supposedly in the works. Windows Cloud would be limited to applications that are available through the Windows Store and is widely believed to be a play for the education market, where Chromebooks are currently popular.
Tim Sweeney, the founder of Epic and lead developer on the Unreal Engine, has been a harsh critic of Microsoft and its Windows Store before. He wasted no time launching a blistering tirade against this new variant of the operating system before Microsoft had even had a chance to launch it.
With all respect to Tim, I think he’s wrong on this for several reasons. First, the idea that the Windows Store is going to crush Steam is simply farcical. There is no way for Microsoft to disallow Steam or other applications from running in mainstream Windows without completely breaking Win32 compatibility in its own operating system. Smartphone manufacturers were able to introduce the concept of app stores and walled gardens early on, before their users had accumulated decades of legacy software; Windows has no such luxury. Fortune 500 companies, gamers, enthusiasts, and computer users in general would never accept an OS that refused to run Win32 applications.
The second reason the Windows Store is never going to crush Steam is that the Windows Store is, generally speaking, a wasteland where software goes to die. The mainstream games that have debuted on that platform have generally been poor deals compared with what’s available on other platforms (like Steam). There’s little sign Microsoft is going to change this anytime soon, and until it does, Steam’s near-monopoly on PC game distribution is safe.
Third, if Microsoft is positioning this as a play against Chrome OS, Windows Cloud isn’t going to debut on high-end systems that are gaming-capable in the first place. This is a play aimed at low-end ARM or x86 machines with minimal graphics and CPU performance. In that space, a locked-down system is a more secure system. That’s a feature, not a bug, if your goal is to build systems that won’t need constant IT servicing to deal with trojans, malware, and bugs.
Like Sweeney, I value the openness and capability of the PC ecosystem — but I also recognize that there are environments and situations where that openness is a risk with substantial downside and little benefit. Specialized educational systems for low-end markets are not a beachhead aimed at destroying Steam. They’re a rear-guard action aimed at protecting Microsoft’s educational market share from an encroaching Google.
SQL Server 2016, Microsoft’s newest database software, is set to become available on June 1 along with a no-cost, developers-only version.
With its new features and revised product editions, Microsoft is determined to expand SQL Server’s appeal to the largest possible number of customers running in a range of environments. But there’s still no word on the promised SQL Server for Linux, a version of the popular database that Microsoft hopes will open SQL Server to an entirely new audience.
A broader SQL Server market awaits
Much of what’s new in SQL Server 2016 is aimed at roughly two classes of users: those doing their data collection and storage in the cloud (or moving to the cloud) and those doing analytics work that benefits from being performed in-memory. Features like Stretch Database will appeal to the former, as SQL Server tables can be expanded incrementally into Microsoft Azure — a more appealing option than a disruptive all-or-nothing migration.
Big data features include expanded capabilities for the Hekaton in-memory functions introduced in SQL Server 2014, plus in-memory columnstore functions for real-time analytics. And SQL Server’s close integration with the R language tools that Microsoft recently acquired opens up the database to a range of new applications from a thriving software ecosystem.
The forthcoming Linux version of SQL Server, though, is how Microsoft really plans to expand to an untapped market. And not just Linux users, but a specific kind of Linux user: those who use Oracle on Linux but are tired of Oracle’s unpredictable licensing. Oracle has been trying to change its tune, but there’s a lot to be said for being able to run SQL Server without also needing to run Windows.
Which versions and when?
Two big questions still remain about SQL Server for Linux. The first is when it will see the light of day; Microsoft hasn’t provided a timeframe yet. (A Microsoft spokesperson could provide no new comment.)
The second is what its pricing and SKUs will look like; will the feature set match what’s available on Windows or will it be a stripped-down version? Microsoft has versions of SQL Server to match most any workload or budget, from the free-to-use Express edition to the full-blown Enterprise variety.
With SQL Server 2014 — and now with 2016 as well — the company introduced a free-to-use developer version of the Enterprise SKU intended solely for dev and testing work. It’s unclear whether SQL Server on Linux will also include a developer version or only include editions specifically for commercial use.
Whatever happens with SQL Server on Linux, Microsoft’s already making aggressive efforts to woo Oracle users into its camp. The company has a limited-time Oracle-to-SQL-Server migration offer, where Microsoft Software Assurance customers can swap Oracle licenses for SQL Server licenses at no cost. It’ll be intriguing if a similar offer pops up again after Microsoft releases SQL Server for Linux.
Snapchat is now accessing its users’ offline purchase data to improve the targeting of its ads, despite its CEO having previously deemed this kind of advertising “creepy.”
Following in the footsteps of tech and social media giants such as Facebook, Twitter, and Google, Snap Inc. has partnered with a third-party offline data provider called Oracle Data Cloud, according to the Wall Street Journal.
This partnership will allow Snapchat advertisers to access data about what users buy offline in order to more accurately target ads.
Snapchat gets specific
Rather than seeing the generally less invasive, broad-appeal advertisements that used to appear on Snapchat, you’re now more likely to see ads that make you think “how did they know?”, as you’ll be assigned to a specific consumer demographic such as “consumer tech purchaser.”
This decision shows the company is taking its growth seriously, as it’s a different approach from the one CEO Evan Spiegel laid out in June 2015. Back then, Spiegel stated his distaste for such personalized advertising, saying: “I got an ad this morning for something I was thinking about buying yesterday, and it’s really annoying. We care about not being creepy. That’s something that’s really important to us.”
Now, however, Snap Inc. has to do all it can to convince investors that its stock is worth buying when it goes public later this year. Such an advertising approach is a good way to do so, because it should make Snapchat a more attractive option to advertisers: targeted adverts are likely to earn more per view.
Fortunately, if this kind of advertising doesn’t sit well with you, whether because you consider it invasive or because you’re just incredibly susceptible, Snapchat is giving its users the ability to opt out. The changed adverts have already started rolling out, so you can opt out now.
To do so, simply go into the settings section within the Snapchat app, go to Manage Preferences, select Ad Preferences and switch off the Snap Audience Match function.
There’s some good news for privacy-minded individuals who haven’t been fond of Microsoft’s data collection policy with Windows 10. When the upcoming Creators Update drops this spring, it will overhaul Microsoft’s data collection policies. Terry Myerson, executive vice president of Microsoft’s Windows and Devices Group, has published a blog post with a list of the changes Microsoft will be making.
First, Microsoft has launched a new web-based privacy dashboard with the goal of giving people an easy, one-stop location for controlling how much data Microsoft collects. Your privacy dashboard has sections for Browse, Search, Location, and Cortana’s Notebook, each covering a different category of data MS might have received from your hardware. Personally, I keep the Digital Assistant side of Cortana permanently deactivated and already set telemetry to minimal, but if you haven’t taken those steps you can adjust how much data Microsoft keeps from this page.
Second, Microsoft is condensing its telemetry options. Currently, there are four options — Security, Basic, Enhanced, and Full. Most consumers only have access to three of these settings — Basic, Enhanced, and Full. The fourth, Security, is reserved for Windows 10 Enterprise or Windows 10 Education. Here’s how Microsoft describes each category:
Security: Information that’s required to help keep Windows, Windows Server, and System Center secure, including data about the Connected User Experience and Telemetry component settings, the Malicious Software Removal Tool, and Windows Defender.
Basic: Basic device info, including: quality-related data, app compatibility, app usage data, and data from the Security level.
Enhanced: Additional insights, including: how Windows, Windows Server, System Center, and apps are used, how they perform, advanced reliability data, and data from both the Basic and the Security levels.
Full: All data necessary to identify and help to fix problems, plus data from the Security, Basic, and Enhanced levels.
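For admins who would rather pin a machine to one of these levels than rely on the Settings UI, the reported level is controlled by Microsoft’s documented AllowTelemetry policy value. As an illustrative .reg fragment (pinning a machine to Basic under the old four-level scheme):

```reg
Windows Registry Editor Version 5.00

; AllowTelemetry maps to the telemetry levels:
; 0 = Security (Enterprise/Education only), 1 = Basic, 2 = Enhanced, 3 = Full
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000001
```

On editions other than Enterprise and Education, a value of 0 is treated as Basic.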
That’s the old system. Going forward, Microsoft is collapsing the number of telemetry levels to two. Here’s how Myerson describes the new “Basic” level:
[We’ve] further reduced the data collected at the Basic level. This includes data that is vital to the operation of Windows. We use this data to help keep Windows and apps secure, up-to-date, and running properly when you let Microsoft know the capabilities of your device, what is installed, and whether Windows is operating correctly. This option also includes basic error reporting back to Microsoft.
Windows 10 will also include an enhanced privacy section that will show during start-up and offer much better granularity over privacy settings. Currently, many of these controls are buried in various menus that you have to manually configure after installing the operating system.
It’s nice that Microsoft is cutting back on telemetry collection at the Basic level. The problem is, as Steven J. Vaughan-Nichols writes, Microsoft is still collecting a creepy amount of information on “Full,” and it still defaults to sharing all this information with Cortana — which means Microsoft has data files on people that it can be compelled to turn over by a warrant from an organization like the NSA or FBI. Given the recent expansion of the NSA’s powers, this information can now be shared with a variety of other agencies without being filtered first. And while Microsoft’s business model doesn’t directly depend on scraping and selling customer data the way Google’s does, the company is still gathering an unspecified amount of information. Full telemetry, for example, may “unintentionally include parts of a document you were using when a problem occurred.” Vaughan-Nichols isn’t thrilled about that idea, and neither am I.
The problem with Microsoft’s disclosure is that it mostly doesn’t disclose. Even Basic telemetry is described only as including “data that is vital to the operation of Windows.” Okay. But what does that mean?
I’m glad to see Microsoft taking steps towards restoring user privacy, but these are small steps that only modify policies around the edges. Until the company actually and meaningfully discloses what telemetry is collected under Basic settings and precisely what Full settings do and don’t send in the way of personally identifying information, the company isn’t explaining anything so much as it’s using vague terms and PR in place of a disclosure policy.
As I noted above, I’d recommend turning Cortana (the assistant) off. If you don’t want to do that, you should regularly review the information MS has collected about you and delete any items you don’t want to be part of the company’s permanent record.
Norway will become the first country in the world this week to start turning off its FM radio network as the country moves to a digital-only broadcasting system.
On January 11, the city of Bodø, in the northern county of Nordland, will be the first to have its signal shut off, with the rest of the nation’s signal being closed down by the end of the year. The country has been split into six regions for the turn-off.
Frequency modulation (FM) was invented in 1933 and more widely introduced in the 1950s. FM radio is commonly broadcast between 87.5 and 108.0 MHz.
“The fact that the FM network will be phased out does not mean radio silence in Norway,” Digital Radio Norway says on its website. Instead, the organisation claims there will be five times the number of radio channels available.
The radio group says it would take “huge” investments to bring the existing FM standard to a higher quality; the last Norwegian channels to launch on FM debuted in 2004 and 1993. Instead of FM, the country will be moving to DAB (Digital Audio Broadcasting). The format, which is used in the UK alongside FM, was created by researchers in the 1980s.
“A lot of work has been done during the preparations to ensure a good replacement is in place,” Ole Jørgen Torvmark, the CEO of Digital Radio Norway said. “The DAB network has been thoroughly measured and adjusted, and a great deal of information has been made available to listeners”.
Despite the changes – the switch-off was approved by Norway’s parliament, which first floated the idea in 2011 – not everyone is in favour. According to Reuters, 66 per cent of the country opposes the switch-off, with only 17 per cent approving of the digital-only method.
Car radios are said to be one of the biggest issues for those in the country. One critic of the plan said in 2016 that the move to digital-only was “embarrassing”. “Norwegian politicians have decided to make 15 million FM radios useless. It’s a bad idea,” Jan Thoresen, a digital expert, said in an opinion piece.
Norway is the first country to implement the digital switch but isn’t the only one considering it. Switzerland and Denmark are also considering a change and the UK has been having discussions about a digital radio policy for years.
The UK’s digital TV switchover finished in 2012 but radio has been slower. At one point, a digital radio switch had been planned for 2015. However, before the change happens at least 50 per cent of UK radio listening must come from digital radios and signal coverage has to be comparable to that of the FM network.
We can create software with 100 times fewer vulnerabilities than we do today, according to computer scientists at the National Institute of Standards and Technology (NIST). To get there, they recommend that coders adopt the approaches they have compiled in a new publication.
The 60-page document, NIST Interagency Report (NISTIR) 8151: Dramatically Reducing Software Vulnerabilities, is a collection of the newest strategies gathered from across industry and other sources for reducing bugs in software. While the report is officially a response to a request for methods from the White House’s Office of Science and Technology Policy, NIST computer scientist Paul E. Black says its contents will help any organization that seeks to author high-quality, low-defect computer code.
“We want coders to know about it,” said Black, one of the publication’s coauthors. “We concentrated on including novel ideas that they may not have heard about already.”
Black and his NIST colleagues compiled these ideas while working with software assurance experts from many private companies in the computer industry as well as several government agencies that generate a good deal of code, including the Department of Defense and NASA. The resulting document reflects their cumulative input and experience.
Vulnerabilities are common in software. Even small applications have hundreds of bugs by some estimates. Lowering these numbers would bring many advantages, such as reducing the number of computer crashes and reboots users need to deal with, not to mention decreasing the number of patch updates they need to download.
The heart of the document, Black said, is five sets of approaches, tools and concepts that can help, all of which can be found in the document’s second section. The approaches are organized under five subheadings that, despite their jargon-heavy titles, each possess a common-sense idea as an overarching principle.
These approaches include: using math-based tools to verify the code will work properly; breaking up a computer’s programs into modular parts so that if one part fails, the whole program doesn’t crash; connecting analysis tools for code that currently operate in isolation; using appropriate programming languages for the task that the code attempts to carry out; and developing evolving and changing tactics for protecting code that is the target of cyberattacks.
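The modularity idea among these approaches can be sketched in a few lines of Python. This is a hypothetical pipeline, not code from the report: each stage runs behind a narrow interface, so a failure in one stage is contained and reported rather than crashing the whole program.

```python
# Sketch of the "modular parts" principle: run each stage in isolation,
# so one stage's failure doesn't take down the whole program.
# (Hypothetical example; the NIST report does not prescribe this code.)

def parse(raw):
    """One small, replaceable stage; may raise ValueError on bad input."""
    return int(raw)

def run_stage(stage, data):
    """Run one stage in isolation; report failure instead of crashing."""
    try:
        return True, stage(data)
    except Exception as exc:
        return False, f"{stage.__name__} failed: {exc}"

# A bad record is contained as a per-item error, not a program crash.
results = [run_stage(parse, item) for item in ["1", "2", "oops", "4"]]
ok = [value for success, value in results if success]
errors = [value for success, value in results if not success]
print(ok)      # [1, 2, 4]
print(errors)  # one "parse failed: ..." message for the bad record
```

The same containment idea scales up to processes and services: the narrower the interface between parts, the smaller the blast radius of any one bug.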
In addition to the techniques themselves, the publication offers recommendations for how the programming community can educate itself about where and how to use them. It also suggests that customers should request the techniques be used in development. “You as a consumer should be able to write it into a contract that you want a vendor to develop software in accordance with these principles, so that it’s as secure as it can be,” Black said.
Security is, of course, a major concern for almost everyone who uses technology these days, and Black said that the White House’s original request for these approaches was part of its 2016 Federal Cybersecurity R&D Strategic Action Plan, intended to be implemented over the next three to seven years. But though ideas of security permeate the document, Black said the strategies have an even broader intent.
“Security tends to bubble to the surface because we’ve got adversaries who want to exploit weaknesses,” he said, “but we’d still want to avoid bugs even without this threat. The effort to stymie them brings up general principles. You’ll notice the title doesn’t have the word ‘security’ in it anywhere.”
Those who wondered what it would be like to run Microsoft SQL Server on Linux now have an answer. Microsoft has released the first public preview of the long-promised product.
Microsoft also wants to make clear this isn’t a “SQL Server Lite” for those satisfied with a reduced feature set. Microsoft has a four-point plan to make this happen.
First is through broad support for all major enterprise-grade Linux editions: Red Hat Enterprise Linux, Ubuntu Linux, and soon Suse Linux Enterprise Server. “Support” means behaving like other Linux applications on the distributions, not requiring a Microsoft-only methodology for installing or running the app. An introductory video depicts SQL Server installed on RHEL through the system’s yum package manager, and a white paper describes launching SQL Server’s services via systemd.
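That workflow looks like ordinary Linux administration. As a sketch of the commands the video and white paper describe (the `mssql-server` package name is the one Microsoft documents; the repository-registration step is omitted here):

```shell
# Install SQL Server from Microsoft's package repository, once configured (RHEL)
sudo yum install -y mssql-server

# One-time configuration: accept the EULA, choose an edition, set the SA password
sudo /opt/mssql/bin/mssql-conf setup

# From here on it behaves like any other systemd-managed service
sudo systemctl start mssql-server
sudo systemctl status mssql-server
```

In other words, no Microsoft-only installer or service manager: the package manager and systemd do the work, just as they would for any other daemon.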
Second, Microsoft promises the full set of SQL Server 2016’s features for Linux users—not only support for the T-SQL command set, but high-end items like in-memory OLTP, Always Encrypted, and row-level security. It will be a first-class citizen on Linux, as SQL Server has been on Windows itself.
Third is Linux support for the tooling around SQL Server—not SQL Server Management Studio alone, but also the Migration Assistant for relocating workloads to Linux systems and the sqlps PowerShell module. This last item is in line with a possibility introduced when PowerShell was initially open-sourced: Once ported to Linux, it would become part of the support structure for other big-name Microsoft applications as they, too, showed up on the OS. That’s now happening.
By bringing SQL Server to Linux, Microsoft can compete more directly with Oracle, which has long provided its product on Linux. Oracle may be blunting the effects of the strategy by shifting customers toward a cloud-based service model, but any gains are likely to be hard-won.
The other, immediate benefit is to provide Microsoft customers with more places to run SQL Server. Enterprises have historically run mixes of Linux and Windows systems, and SQL Server on Linux will let them shave the costs of running some infrastructure.
Most of all, Microsoft is striving to prove a Microsoft shop can lose little, and preferably nothing, by making a switch—and a new shop eyeing SQL Server has fewer reasons to opt for a competing database that’s Linux-first.
In a statement sent to The Wall Street Journal, a LeEco spokesperson stated that it is currently investigating why its stock price had dropped close to eight percent on Tuesday, and thus made the decision to halt trading. The spokesperson added that the company is “in the process of planning major matters, which are expected to involve integration of industry resources.” Exactly what types of changes are in the works is currently unknown.
Less than two months ago, LeEco was holding a big press conference in San Francisco, announcing its official launch of products in the US. That included its Le Pro3 and Le S3 Android smartphones. However, the press conference was an odd affair, full of unconnected buzzwords and demos of products like an Android-based bicycle that may now never come to market.
Since then, rumors about the company’s financial issues have continued, even though LeEco announced it had raised $600 million in new financing. Last week, sales of its phones began in the US at its own LeMall website, along with Amazon, Target and Best Buy. However, the company could decide to make an early exit from the US market as part of its upcoming changes.
The end of support for a product as wide-reaching as SQL Server can be a stressful time for the database administrators whose job it is to perform upgrades. However, two database experts see SQL Server 2005 end of life on April 12 as a blessing in disguise.
Bala Narasimhan, vice president of products at PernixData, and David Klee, founder and chief architect of Heraflux Technologies, said SQL Server 2005 end of life presents the opportunity DBAs need to take stock of their databases and make changes based on what newer versions of SQL Server have to offer.
SearchSQLServer spoke to Narasimhan and Klee about the best way for DBAs to take advantage of the opportunity that the end of support creates.
This is the first part of a two-part article.
How can we turn SQL Server 2005 end of life into an opportunity for DBAs?
David Klee: I’ve been a DBA. I’ve been a system administrator. I’ve been an IT manager and an architect, and a lot of these different components overlap. My biggest take on it, from the role of the DBA, is that their number one job is to make sure that the data is there when you need it. Secondly, it’s about performance. The upgrade process is, in my eyes, wonderful, because the new versions of SQL Server 2012 and 2014, soon to be 2016, give you a lot more options for enterprise level availability. [They simplify] things. [They give] you better uptime. [They give] you better resiliency to faults. These are features that are just included with [them].
What this is doing is giving people a good opportunity to get the stragglers out of their environment. I’m out in the field a lot. I do see a lot of 2005 machines out there. It’s one of those things where the management mindset is: “If it’s not broke, don’t fix it.” But, with end of life and end of support, it’s pretty significant.
Bala Narasimhan: I’m similar to David in terms of background, except that I did R&D at Oracle and other database companies. Since 2005, there has been a lot of innovation that has happened at a lot of database companies on the database side itself, but also on the infrastructure side holding these databases. I think it’s an opportunity to leverage all of that innovation as well. The end of life gives you a chance to look back at all of the innovations on the database side and on the infrastructure side as well. Sometimes, those innovations are complementary and sometimes they’re not. It gives you an opportunity to evaluate those and see what’s right for you in 2016.
In [SQL Server] 2014, there are features such as the columnstore and in-memory computing and all of that. … It may be the case that you can leverage similar functionality without having to upgrade to 2014, because there are other innovations happening in the industry. This may be another example of where you can step back and [ask yourself], “Should I upgrade to 2014 to get there? Or should I upgrade to 2012 because I don’t need it? Or is there another way to get the same capability?”
We’re both advocating for the right tool for the job.
Klee: Exactly. I don’t think that there is a specific answer to that. I think it depends on what that particular DBA wants and what that particular business is trying to achieve. There are multiple ways to achieve that and this is giving you an opportunity to evaluate that.
What are your suggestions for how DBAs can best take advantage of this upgrade?
Narasimhan: This is a time to take a step back. I would recommend having a conversation that includes the DBA; the storage admin; and, if they’re virtualized, the virtualization admin as well and try to understand what all three are trying to achieve because, at the end of the day, you need to run the database on some kind of infrastructure. In 2005, it needn’t have been virtualized, but, in today’s world, it will most probably be virtualized. So, bring them all to the table and try to understand what they need to do from a database perspective and an infrastructure perspective.
Once you’ve done that, there are other conversations to have, such as: “Do we want to undertake an application rewrite?” For instance, if you’re going to upgrade from 2005 to 2014 because you want to leverage the in-memory capabilities of SQL Server, then you need to revisit your database schema. You may need to rewrite your application. There are cardinality estimation changes that can force a rewrite. Do you want to incur those costs? Sometimes the answer may be yes and sometimes the answer may be no. If not, it’s not required that you go to 2014; you can go to 2012.
Similarly, it’s a chance to say this application has evolved over time. The optimizer has changed in SQL Server. Therefore the I/O capabilities have changed. Maybe we should talk to the storage admin and the virtualization admin and figure out what kind of infrastructure we’ll need to support this application successfully post-upgrade.
I will, therefore, open up the conversation a little bit and bring other stakeholders to the table before deciding which way to go.
Klee: My take on it is pretty much aligned with that. It’s, essentially, look at the architectural choices that went into that legacy deployment — high availability, performance, virtualization or no virtualization. Revisit today and see if the technology has changed, or you can simplify some of those choices or even replace them with features that weren’t even around back in the day. Availability Groups, virtualization, even public cloud deployments, any of the in-memory technologies, they were just not around back in the 2005 days, and now they’re just extremely powerful and extremely useful.
The Windows 10 app, Torrent Platinum, is being given away for free for the next 24 hours in the Windows Store. The app works as a torrent client and functions on both Windows 10 and Windows 10 Mobile devices. Here’s the official app description:
Choose Torrent Platinum and download files comfortably! Torrent Platinum is a new easy to use and having a friendly interface the torrent client that will help you quickly and easily download different files (movies, music, TV shows, books and much more).
Torrent Platinum is: High speed download files; Modern design; Simple and intuitive interface.
You finally found the most convenient way to download content to your device! It now remains to click “Install” Torrent Platinum and enjoy!