Here’s everything that’s new in Windows 10 Insider build 14965 for PC and Mobile


Head of the Windows 10 Insider program Dona Sarkar has pushed her big red build-release button today, granting eager testers a chance to play around with a new version of Windows 10. Windows 10 preview 14965 is currently rolling out to Insiders, and here is a long list of new items for all to play with:

What’s new in Build 14965

Controlling external monitors from tablets just got easier (PC): You can now drive content on a second display from your tablet without ever having to attach a mouse. The virtual touchpad lets you do more with a tablet and a second screen – just connect to another monitor, PC, or TV, go to Action Center and tap on the “Project” Quick Action to extend your screen. Use it just like you would a physical touchpad to control content on the connected screen. To enable it, press and hold on the taskbar and select “Show touchpad button”. A touchpad icon will now appear in the notification area (just like Windows Ink Workspace does), and tapping on it will bring up the virtual touchpad.

You’ve seen the work we’ve been doing for precision touchpad customization with the last few builds, and the virtual touchpad is no exception. While the virtual touchpad is open, go to Settings > Devices > Touchpad and you’ll be able to tweak the touchpad settings to your preferences.

Sticky Notes update (PC): Windows Insiders in the Fast ring will receive a Sticky Notes app update to version 1.2.9.0 today, and we’re very excited to share what it includes!

  • We’ve expanded our support for Insights to many more languages and regions, with even more to come in future updates – stay tuned! Specifically, with this version:
    • We’ve added flight detection for German (Germany), more English locales (Canada, Great Britain, India, Arabia), Spanish (Spain & Mexico), French (France, Canada), Italian (Italy), Japanese (Japan), and Portuguese (Brazil).
    • We’ve added email & URL recognition for every locale (except Chinese (simplified or traditional), Korean, or Japanese, which we’re still working on).
    • We’ve added phone number recognition to all English, German, Spanish, French and Italian locales.
    • We’ve added address recognition support for English (Great Britain) and Spanish (United States).
    • We’ve added time recognition (prompting to create a Cortana reminder) to English (Great Britain), English (Australia) and English (India).
    • We’ve added stock recognition (for example, $MSFT) to English (Australia), English (Canada), English (India), German (Germany), Spanish (Spain), Spanish (Mexico), French (France), French (Canada), Japanese (Japan) and Portuguese (Brazil).
  • If Insights in Sticky Notes isn’t turned on automatically for you, tap “…” > Settings gear > “Enable insights”. Note: The language and region used to detect Insights in Sticky Notes is based on the active keyboard. We’re currently investigating some issues where Insights may not show up as expected if you switch keyboards while typing in Sticky Notes.
  • We’ve fixed some issues with Undo and Redo (CTRL + Z/CTRL + Y) while typing, so they now work more reliably.
  • We’ve improved the performance of text input while typing.
  • It’s now easier than ever to get the latest Sticky Notes app updates. When our next update is available, we’ll show an in-app prompt so all you have to do is click ‘Update’.
  • We’ve also done a whole lot of UI/UX polishing and performance improvements that we hope you enjoy.

We’ve been making a bunch of improvements based on your feedback, and have more to come, so let us know what you’d like to see next! In recent versions of Sticky Notes, we’ve added support for many of your favorite keyboard shortcuts, including CTRL + B (bold), CTRL + I (italic), CTRL + N (new note) and CTRL + D (delete note), added a new context menu for easy copy/paste, reduced the minimum note size for typists, as well as generally improved our reliability and performance.

Windows Ink Workspace Improvements (PC): This build includes a number of improvements to the Windows Ink Workspace.

  • We increased the number of Recently Used apps shown in the Windows Ink Workspace to 6, and added a link to quickly access your pen settings.
  • We’ve improved the performance of loading Sketchpad when there’s a lot of ink present on the sketch.
  • We’ve updated the new protractor, so that you can use the scroll-wheel on your mouse to shrink/expand it (depending on the direction of the scrolling).
  • We fixed an issue where, when using Sticky Notes in the Windows Ink Workspace, the background would ding when tapped.
  • We fixed an issue where inking and resizing the protractor at the same time would result in Sketchpad crashing.
  • We’ve updated the “Pen & Windows Ink” Settings for pen users to now include a link to the handwriting training tool – simply click on “Get to know my handwriting” to launch it. We’ve also improved how we learn from your handwriting samples – try it out and let us know what you think!

Enhancing the Address Bar in Registry Editor (PC): We were really excited to hear how excited you were about the new address bar in Registry Editor, and based on your feedback, we’ve already incorporated two new features:

  1. You can now use CTRL + L to set focus to the address bar – while we already supported ALT + D, we recognize that some people prefer this keyboard shortcut instead, so now you have the option to use either one.
  2. You can now use shorthand notation for HKEY names – you told us that when sharing registry paths you always use shorthand notation (HKCU) instead of typing out the full HKEY name (i.e. HKEY_CURRENT_USER), so we should support them in the address bar. And you know what? We agree! You can now use “HKCR”, “HKCU”, “HKLM”, and “HKU” instead of typing or pasting the full names “HKEY_CLASSES_ROOT”, “HKEY_CURRENT_USER”, “HKEY_LOCAL_MACHINE”, and “HKEY_USERS” into the address bar – see the sketch below.
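
For anyone curious how such a feature might work, here’s a minimal Python sketch of the shorthand expansion – purely illustrative, not Registry Editor’s actual code:

    # Hypothetical sketch of HKEY shorthand expansion - illustration only.
    HKEY_SHORTHAND = {
        "HKCR": "HKEY_CLASSES_ROOT",
        "HKCU": "HKEY_CURRENT_USER",
        "HKLM": "HKEY_LOCAL_MACHINE",
        "HKU":  "HKEY_USERS",
    }

    def expand_registry_path(path: str) -> str:
        """Expand a shorthand root (e.g. HKCU\\Software) to its full HKEY name."""
        root, sep, rest = path.partition("\\")
        return HKEY_SHORTHAND.get(root.upper(), root) + sep + rest

    print(expand_registry_path(r"HKCU\Software\Microsoft"))
    # HKEY_CURRENT_USER\Software\Microsoft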

Improving Your Hyper-V VM experience (PC): Following the new VM scaling options mentioned in last week’s build, we’ve fixed an issue where, depending on the zoom level selected, the VM window might not be created large enough to avoid scrollbars, despite there being enough space for it. We also updated the logic so that when you pick a particular zoom level, that preference will be preserved for the next VM connection. Along the way, we fixed an issue where the title bar of a maximized VM window would be occluded when the taskbar had been set to always on top.

While it may not include many of the cool new features shown during the Creators Update reveal a few weeks ago, this build looks to be a solid release from the Windows team.

[Source:- Winbeta]

Uber takes its app down new road with redesign

Uber is taking its ride-hailing app down a new road in an effort to make it smarter, simpler and more fun to use.

The redesigned app also will seek to mine personal information stored on smartphones in a change that could raise privacy concerns, even though it will be up to individual users to let Uber peer into their calendars and address books.

The change represents the biggest overhaul in four years to Uber’s popular app, which is used by millions of people to summon cars in more than 450 cities around the world for rides that are usually cheaper than traditional taxis.

But as Uber has grown, the app has been adding features that have made it more difficult to navigate. The new design and features are meant to save passengers time and money. The new app will begin to roll out Wednesday, though it could take a couple of weeks before all users get the update.

As part of the new look, Uber will spell out more clearly how long it will take and how much it will cost to reach a destination in different types of available cars. The app will also recommend the best places to be picked up in congested areas.

The reprogrammed app also will study a rider’s traveling history and list frequently ordered destinations as “shortcuts.”

In another time-saving move that will test how much users trust the San Francisco company with their personal information, users will be able to give the app access to their calendars so addresses listed in an entry can automatically appear in the Uber app near the time of the appointment. Uber plans to introduce this option by next month.

Starting in December, Uber will also seek access to users’ personal contacts so they can ask for a ride to where a friend currently is, even if the friend isn’t home. If this feature is activated, Uber’s app will contact the friend to ask if he or she is willing to share the current location. If the friend doesn’t have the Uber app, the request will be sent through a text message to the mobile number listed in the address book.

Uber says it doesn’t expect privacy objections because users will have to agree to allow the app to scan their calendars and address books. And people whose locations are being sought through the new address-book feature will be able to decide if they want to share the information.

The redesigned app will also offer features from other services that Uber riders might enjoy during the trip to their destination. The additions include the ability to check out restaurant reviews through Yelp, send messages through Snapchat and listen to music on Pandora.

[Source:- Phys.org]

The 7 most vexing problems in programming

It’s been said that the uncharted territories of the old maps were often marked with the ominous warning: “Here be dragons.” Perhaps apocryphal, the idea was that no one wandering into these unknown corners of the world should do so without being ready to battle a terrifying foe. Anything could happen in these mysterious regions, and often that anything wasn’t good.

Programmers may be a bit more civilized than medieval knights, but that doesn’t mean the modern technical world doesn’t have its share of technical dragons waiting for us in unforeseen places: Difficult problems that wait until the deadline is minutes away; complications that have read the manual and know what isn’t well-specified; evil dragons that know how to sneak in inchoate bugs and untimely glitches, often right after the code is committed.

There will be some who rest quietly at night, warmed by their naive self-assurance that computers are utterly predictable, earnestly churning out the right answers. Oh, how little they know. For all the hard work of chip designers, language developers, and millions of programmers everywhere, there are still thorny thickets of programming problems that can bring even the mightiest programmers to their knees.

Here are seven of the gnarliest corners of the programming world where we’d put large markers reading, “Here be dragons.”

Multithreading

It sounded like a good idea: Break your program into independent sections and let the OS run them like separate little programs. If the processors have four, six, eight, or even more cores, why not write your code so that it can have four, six, eight, or more threads using all of the cores independently?

The idea works—when the parts are in fact completely separate and have nothing to do with one another. But once they need to access the same variables or write bits to the same files, all bets are off. One of the threads is going to get to the data first and you can’t predict which thread it will be.

Thus, we create monitors, semaphores, and other tools for organizing the multithreaded mess. When they work, they work. They merely add another layer of complexity and turn the act of storing data in a variable into an item that requires a bit more thought.

When they don’t work, it’s pure chaos. The data doesn’t make sense. The columns don’t add up. Money disappears from accounts with a poof. It’s all bits in memory. And good luck trying to pin down any of it. Most of the time developers end up locking down big chunks of the data structure so that only one thread can touch it. That may stem the chaos, but only by killing most of the upside of having multiple threads working on the same data. You might as well rewrite it as a “single-threaded” program.
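
To see why unsynchronized threads corrupt data, here’s a minimal Python sketch: several threads increment a shared counter, and because “read, add, write” is not atomic, updates get lost unless a lock serializes access.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1       # read-modify-write: not atomic, updates can be lost

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:         # only one thread touches counter at a time
                counter += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)  # 400000 with the lock; often less if you swap in unsafe_increment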

Closures

Somewhere along the line, someone decided it would be useful to pass around functions as if they were data. This worked well in simple instances, but programmers began to realize that problems arose when functions reached outside themselves and accessed other data, often called “free variables.” Which version was the right one? Was it the data when the function call was initiated? Or was it when the function actually runs? This is especially important for JavaScript where there can be long gaps in between.

The solution, the “closure,” is one of the biggest sources of headaches for JavaScript (and now Java and Swift) programmers. Newbies and even many veterans can’t figure out what’s being closed and where the boundaries of the so-called closure might be.

The name doesn’t help—it’s not like access is closed down permanently like a bar announcing last call. If anything, access is open but only through a wormhole in the data-time continuum, a strange time-shifting mechanism that is bound to eventually spawn a sci-fi TV show. But calling it a “Complex Stack Access Mechanism” or “Data Control Juggling System” seems too long, so we’re stuck with “closures.” Don’t get me started on whether anyone needs to pay for the nonfree variables.
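
Python closures show the same confusion. In the sketch below, every function created in the loop closes over the variable itself, not its value at creation time, so all of them see the final value:

    # Each lambda closes over the variable i, not its value when the lambda was made.
    callbacks = [lambda: i for i in range(3)]
    print([f() for f in callbacks])    # [2, 2, 2] - every closure sees the final i

    # Binding the current value as a default argument freezes it per closure.
    callbacks = [lambda i=i: i for i in range(3)]
    print([f() for f in callbacks])    # [0, 1, 2]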

Too big data

When RAM starts filling up, everything starts going wrong. It doesn’t matter if you’re performing newfangled statistical analysis of consumer data or working on a boring, old spreadsheet. When the machine runs out of RAM, it turns to so-called virtual memory that spills out into the superslow hard disk. It’s better than crashing completely or ending the job, but boy does everything slow down.

The problem is that hard disks are at least 20 or 30 times slower than RAM and the mass-market disk drives are often slower. If some other process is also trying to write or read from the disk, everything becomes dramatically worse because the drives can do only one thing at a time.

Activating the virtual memory exacerbates other, hidden problems with your software. If there are threading glitches, they start to break much faster because the threads that are stuck out in the hard disk virtual memory run so much slower than the other threads. That only lasts a brief period, though, because the once wallflower threads get swapped into memory and the other threads hang up. If the code is perfect, the result is merely much slower. If it’s not, the flaws quickly send it crashing into disaster. That’s one small example.

Managing this is a real challenge for programmers who are working with big piles of data. Anyone who gets a bit sloppy with building wasteful data structures ends up with code that slows to a crawl in production. It may work fine with a few test cases, but real loads send it spiraling into failure.
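
One common shape of that sloppiness, sketched below in Python (the file name is hypothetical): materializing an entire dataset in RAM when only a running aggregate is needed. A generator keeps memory flat no matter how big the input grows.

    def read_values(path):
        """Stream one numeric value per line instead of loading the whole file."""
        with open(path) as f:
            for line in f:
                yield float(line)

    # Wasteful: builds the full list in RAM first - fine in testing,
    # a swap-thrashing disaster at production scale.
    # total = sum(list(read_values("measurements.txt")))

    # Frugal: consumes values lazily, one at a time.
    total = sum(read_values("measurements.txt"))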

NP-complete

Anyone with a university education in computer science knows of the mysterious problems wrapped in an acronym that’s rarely spelled out: nondeterministic polynomial complete, aka NP-complete. The details often take an entire semester to learn, and even then, many CS students come out with a foggy notion that no one can solve these problems because they’re too hard.

The NP-complete problems often are quite difficult—if you attack them simply with brute force. The “traveling salesman problem,” for example, can take an exponentially long time as the sales route includes more and more cities. Solving a “knapsack problem” – finding the subset of numbers that comes closest to some value N – means trying all possible subsets, which is a very big number. Everyone runs with fear from these problems because they’re the perfect example of one of the biggest bogeymen in Silicon Valley: algorithms that won’t scale.

The tricky part is that some NP-complete problems are easy to solve with an approximation. The algorithms don’t promise the exact solution, but they come pretty close. They may not find the perfect route for the traveling salesman, but they can come within a few percentage points of the right answer.

The existence of these pretty good solutions only makes the dragons more mysterious. No one can be sure whether the problems are truly hard, or easy once you’re willing to settle for an answer that’s just good enough.
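
As a taste of how those approximations look, here’s a short Python sketch of a greedy heuristic for the knapsack-style subset problem described above: polynomial time, usually close to the target, no guarantee of the exact optimum.

    def greedy_subset_sum(numbers, target):
        """Greedily take the largest numbers that still fit under target.

        Runs in polynomial time (sorting dominates); the exact problem is
        NP-complete, since brute force must try all 2^n subsets.
        """
        chosen, total = [], 0
        for x in sorted(numbers, reverse=True):
            if total + x <= target:
                chosen.append(x)
                total += x
        return chosen, total

    print(greedy_subset_sum([8, 6, 5, 4, 3, 1], 14))   # ([8, 6], 14)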

Security

“There are known knowns; there are things we know we know,” Donald Rumsfeld, the Secretary of Defense during the second Bush administration, once said at a press conference. “We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”

Rumsfeld was talking about the war in Iraq, but the same holds true for computer security. The biggest problems are holes that we don’t even know are possible. Everyone understands that you should make your password hard to guess—that’s a known known. But who has ever been told that your networking hardware has its own software layer buried inside? The possibility that someone could skip hacking your OS and instead target this secret layer is an unknown unknown.

The possibility of that kind of hack may not be unknown to you now, but what if there are others? We have no clue if we can harden the holes we don’t even know exist. You can batten down the passwords, but there are cracks you can’t even imagine. That’s the fun of working with computer security. And when it comes to programming, security-minded thinking is becoming ever more important. You can’t leave it to the security pros to clean up your mess.

Encryption

Encryption sounds powerful and impenetrable when law enforcement officials get in front of Congress and ask for official loopholes to stop it. The problem is that most encryption is built on a foggy cloud of uncertainty. What mathematical proofs we have rest on uncertain assumptions, such as the assumption that it’s hard to factor really big numbers or compute a discrete logarithm.

Are those problems truly hard? No one has publicly described any algorithms for breaking them, but that doesn’t mean the solutions don’t exist. If you found a way to eavesdrop on every conversation and break into any bank, would you promptly tell the world and help plug the holes? Or would you remain silent?

The real challenge is using encryption in our own code. Even if we trust that the basic algorithms are secure, there’s much work to be done juggling passwords, keys, and connections. If you make one mistake and leave a password unprotected, everything falls open.
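
Here’s a minimal Python sketch of doing one of those chores properly – storing a salted, stretched hash instead of the password itself, using only the standard library (the iteration count is illustrative, not a vetted policy):

    import hashlib, hmac, os

    def hash_password(password: str):
        """Return (salt, key); store these instead of the password."""
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, key

    def verify_password(password: str, salt: bytes, key: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, key)   # constant-time comparison

    salt, key = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, key))  # True
    print(verify_password("guess", salt, key))                         # False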

Identity management

Everyone loves that New Yorker cartoon with the punchline, “On the internet, nobody knows you’re a dog.” It even has its own Wikipedia page with four elaborate sections. (On the internet, nobody knows the old saw about analyzing humor and dissecting frogs.)

The good news is that anonymity can be liberating and useful. The bad news is that we have no clue how to do anything but anonymous communications. Some programmers talk about “two-factor authentication,” but the smart ones jump to “N-factor authentication.”

After the password and maybe a text message to a cellphone, we don’t have much that’s very stable. Fingerprint readers look impressive, but plenty of people seem willing to divulge how they can be hacked (see here, here, and here for starters).
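
For a sense of what that second factor actually computes, here’s a minimal sketch of a time-based one-time password (TOTP, RFC 6238) in Python’s standard library – the scheme behind most authenticator apps; the secret shown is a demo value:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Derive the current one-time code from a shared base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period              # 30-second time step
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                           # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # e.g. '492039' - changes every 30 seconds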

Not much of this matters to the world of idle chatter on Snapchat or Reddit, but the stream of hacked Facebook pages is a bit disconcerting. There’s no easy way to handle serious matters like property, money, health care, or pretty much everything else in life except meaningless small talk. The bitcoin fanbois love to prattle on about how rock-solid the blockchain may be, but somehow the coins keep getting ripped off (see here and here). We have no real method to handle identity.

Measuring hardness

Of course, when it comes to programming, is there even a way by which we can measure the difficulty of a problem? No one really knows. We do know that some problems are easy to solve, but it’s entirely different to certify one as hard. NP-completeness is only one part of an elaborate attempt to codify the complexity of algorithms and data analysis. The theory is helpful, but it can’t offer any guarantees. It’s tempting to say that it’s hard to even know whether a problem is hard, but well, you get the joke.


[Source:- JW]

Will the R language benefit from Microsoft acquisition?

Microsoft’s recent acquisition of Revolution Analytics represents a significant move on the company’s part. Revolution Analytics is built around the highly popular R language, an open source programming language designed specifically for statistical analytics.

In addition to the R language, Revolution Analytics offers two platforms for developing and deploying R-based applications, one of which is also open source and available free to the public. With this acquisition, Microsoft is clearly moving into new territory. The question that remains is whether the impact will be felt only within Microsoft, or across the R community at large.

The world of Revolution Analytics

Formed in 2007, Revolution Analytics set out to build and support the R community as well as meet the needs of a growing commercial base. Since then, Revolution Analytics has become the world’s largest provider of R-related software and services. That shouldn’t be surprising, given that R is the world’s most widely used programming language for statistical computing and predictive analytics.

Since its rise to fame, Revolution Analytics has continued to support the open source community, contributing regularly to projects such as RHadoop and ParallelR. The company also supports more than 150 R-based user groups across the globe. Revolution Analytics’ own open source product, Revolution R Open, provides a development platform for R-based applications that users can download for free and share with other users, making analytical software affordable to a wide range of individuals and organizations.

Yet Revolution Analytics has been just as busy on the commercial side with Revolution R Enterprise, a more sophisticated version of the open platform. With the enterprise edition, organizations can implement scaled-out options for exploring and modeling large sets of data. The enterprise edition uses parallel external memory algorithms to support large-scale predictive modeling, data statistics and machine-learning capabilities, delivered at breakneck speeds on multiple environments.

A closer look at R

Ross Ihaka and Robert Gentleman at the University of Auckland created the R language in 1993 to address the limitations of existing analytical solutions. In 1995, they released R to the open source community under the terms of the GNU General Public License established by the Free Software Foundation.

From there, the code quickly gained in popularity among analysts and those developing analytical applications. Organizations that have used R include Google, Facebook, Twitter, Nordstrom, Bank of America and The New York Times, to name a few. R set a new standard for analytics that delivered predictive modeling capabilities lacking in more traditional languages.

Because R was created by and for statisticians, it contains many of the features needed to accomplish common statistical-related tasks. For example, R includes data frames, a natural data structure available in few other languages. R also makes it easier to track unknown values within an application so the actual values can be easily inserted once they are known. In addition, R makes it easy to save, reuse and share new analytical techniques with other developers and data scientists.
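
R itself is the natural home for these features, but for a rough feel in another language, here’s a small Python sketch using pandas, whose data frame was borrowed from R – it shows both the tabular structure and a missing value that can be filled in later:

    import pandas as pd

    # A data frame with one unknown value marked as missing rather than guessed.
    df = pd.DataFrame({
        "subject": ["a", "b", "c"],
        "score":   [12.5, None, 9.0],   # None becomes NaN, analogous to R's NA
    })
    print(df["score"].mean())           # 10.75 - the missing value is skipped

    # Insert the actual value once it becomes known.
    df.loc[df["subject"] == "b", "score"] = 11.0
    print(df)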

The R language is particularly efficient at generating visualizations, such as charts and graphs, to identify patterns and hidden anomalies. The language is efficient at reading data, generating lines and points, and properly positioning them into meaningful representations, whether maps, plots (image, scatter, bar), 3-D surfaces or pie charts.

What’s in it for Microsoft?

According to Microsoft, the Revolution Analytics acquisition will help its customers more easily implement advanced analytics within Microsoft platforms, including on-premises, on Microsoft Azure or in hybrid cloud implementations.

To this end, Microsoft plans to build R and Revolution Analytics’ technologies into Microsoft data systems, including SQL Server. Developers and data scientists will be able to take advantage of in-database analytic capabilities that can be deployed across environments. Microsoft also plans to integrate R into Azure HDInsight and Azure Machine Learning, providing more options for creating analytical models that can aid in making data-driven decisions.

Microsoft’s deep pockets also will let the company invest in the R-based applications that Revolution Analytics brings to the table. Microsoft also promises continued support of Revolution R Enterprise across multiple operating systems and heterogeneous platforms such as Hadoop and Teradata, and says it will continue Revolution Analytics’ education and training efforts for developers and data scientists.

What is particularly interesting about Microsoft’s acquisition is its stated commitment to foster Revolution Analytics’ open source nature, not only for the R language, but also for other open source commitments, including Revolution R Open, ParallelR, RHadoop, DeployR and other notable projects.

Perhaps this isn’t such a leap. Microsoft was already an R proponent long before bringing Revolution Analytics on board, having used R to enhance Xbox online gaming and to apply machine learning to data coming from such sources as Bing, Azure and Office. And Microsoft already supports R extensively within the Azure framework.

Microsoft’s acquisition of Revolution Analytics is still new, and despite the promises coming out of both companies, we don’t know what R will look like once everything has played out. What we do know is that R is a popular and widely implemented technology used in a wide range of analytical applications. The marriage between Microsoft and Revolution Analytics could go a long way in solidifying R’s hold on analytics. And we all know how much Microsoft likes to maintain its hold over those industry pieces of the pie.


[Source:- techtarget]

Microsoft adds new chat-based service for workers

Taking a cue from competing online services like Slack, which let workers chat and share information on the job, Microsoft is adding a new program called “Teams” to its Office 365 suite of internet productivity software.

Analysts say Microsoft is catching up to a trend in which a host of tech companies—even Facebook—are competing to offer specialized online networks for organizations, as workers increasingly find that email and simple document-sharing services are too limited for communicating and collaborating.

Like competing services, Microsoft’s new “Teams” product provides a central place online for workplace groups to chat, share files and perform other tasks. But unlike competitors, Microsoft is offering the ability to easily transition into other widely used Microsoft programs, such as Outlook for email and calendars and Skype for voice and video conferences. “Teams” can also incorporate artificially intelligent “bots” and other software programs created by outside developers.

Workplace software is a big business for Microsoft. While the giant tech company is best known for making the Windows operating system for PCs, it racked up more than $26.4 billion in revenue last year from workplace “productivity” programs like Office, which includes software for email, calendars, word-processing and other functions. Although other divisions bring in more revenue, Microsoft’s “productivity” division is its most lucrative, with $12.4 billion in operating profit.

But the company has been threatened by new offerings from big competitors like Google, as well as upstarts like Slack, which provide a central meeting place online where teams of workers can hold running conversations and share files that are easily accessible.

Microsoft bought the workplace social networking service Yammer for more than $1 billion in 2012 and will continue that service, which some companies use as an interactive bulletin board. Analysts say newer, competing services have more functions. And new companies like Slack have entered the market by making their services easily available to individual departments or groups.

But Microsoft has the advantage that its email and other programs are already widely used by companies, which could make it easier to add Teams. It’s also touting that Teams offers encryption and other security measures, along with the ability to integrate with software from outside developers.

“Yes they are late to the market, but they have recognized that and they have done a lot of work to circumvent that problem,” said Vanessa Thompson, an analyst with IDC.

[Source:- Phys.org]

Torrent Platinum Windows 10 app is totally free today

The Windows 10 app, Torrent Platinum, is being given away for free for the next 24 hours in the Windows Store. The app works as a torrent client and functions on both Windows 10 and Windows 10 Mobile devices. Here’s the official app description:

Choose Torrent Platinum and download files comfortably! Torrent Platinum is a new, easy-to-use torrent client with a friendly interface that will help you quickly and easily download different files (movies, music, TV shows, books and much more).

Torrent Platinum offers: high-speed file downloads; modern design; a simple and intuitive interface.

You’ve finally found the most convenient way to download content to your device! All that remains is to click “Install” and enjoy Torrent Platinum!

[Source:- Winbeta]

Docker, machine learning are top tech trends for 2017

Augmented reality, Docker, machine intelligence keep building momentum

With 2017 fast approaching, technology trends that will keep gathering steam in the new year range from augmented and virtual reality to machine intelligence, Docker, and microservices, according to technology consulting firm ThoughtWorks.

In its semiannual Technology Radar report published Monday, ThoughtWorks calls out four IT themes growing in prominence:

  • Virtual reality (VR) and its cousin, augmented reality (AR)
  • Docker as process, PaaS as machine, microservices architecture as programming model
  • Intelligent empowerment
  • The holistic effect of team structure

The data is based on reports ThoughtWorks’ consultants are seeing out in the field.

ThoughtWorks sees natural language processing tools like Nuance Mix, along with hardware that allows for natural interactions, having a “huge” impact on AR and VR adoption. AR differs from VR in that users can still see the world around them rather than being completely immersed in a virtual space; of the two, AR is likely to be the more interesting to businesses.

“One excellent application is remote expert systems,” said Mike Mason, technology activist at ThoughtWorks. “Those are the ones where, for example, a semiskilled, nonexpert worker can be wearing the headset and performing a particular task,” such as a nurse or factory worker getting instructions via the device from a remote expert, Mason said. Though he sees no single leader in AR, Mason calls out Microsoft’s HoloLens as a technology of note. Currently, there is a shortage of high-fidelity VR and AR skills outside of the gaming industry, so the average enterprise will need to acquire or rent them, Mason said.

For Docker, PaaS, and microservices, developers see containers as a self-contained process and the PaaS as the common deployment target, using microservices as the common style, according to ThoughtWorks. “What we’re seeing today is the level of abstraction is being raised up,” Mason said. In the previous paradigm, a process ran only on a machine. “Now, we think about a Docker image as that basic unit of work and computation,” and APIs and microservices serve as a communications fabric.

Intelligent empowerment, meanwhile, has companies frequently open-sourcing sophisticated libraries and tools that would have been “stratospherically expensive” and restricted a decade ago, ThoughtWorks said. New tooling has been made possible by commodity computing and targeting of specific hardware like GPUs and clouds.

Mason noted that “machines and humans [work] together to produce greater outcomes than either one working alone. This was not the dystopian future where everybody’s out of a job because the machines replace us.” For example, computers could digest large quantities of data to suggest potential cancer treatment plans. “The important thing here is it was not replacing what the doctors were going to do.”

For software development team structures, tech companies are popularizing the “you build it, you run it” style of team autonomy, ThoughtWorks said. When restructuring a team produces better results, it shows that software development remains mostly a communication problem.

“There’s lots of IT departments that are stuck in the old days of silos,” Mason said. “Businesses are frustrated that they can’t move at the speed of Silicon Valley startups.” But Silicon Valley executives are now joining traditional enterprises and rebuilding their IT departments in the image of Silicon Valley, he said.

While devops is a factor, this movement is more about self-service infrastructure: teams small enough to achieve it, with business stakeholders included, so they can get software into production and deliver the right business benefits.

[Source:- Javaworld]

Vendors introduce three new SQL Server appliances

Since the release of SQL Server 2014, a number of vendors have announced plans to deliver appliances built around SQL Server 2014 and other data-related technologies. Each appliance provides preconfigured hardware and software aimed at supporting high-performing, data-driven applications.

Several of the appliances are based on the Microsoft Analytics Platform System (APS), a data warehousing platform that integrates structured and non-structured data. Three companies currently offer an APS appliance: Dell, Quanta and HP.

Dell is also teaming up with Fusion-io to deliver the Acceleration Appliance for Databases, a flash-based, platform-agnostic product optimized for enterprise applications such as SQL Server 2014. NEC and HGST have joined forces to offer another flash-based appliance, the PCIe SSD Appliance for Microsoft SQL Server, which is aimed squarely at SQL Server data-related workloads. Let’s look at what these emerging appliances have to offer.

Microsoft Analytics Platform System

When Microsoft released the first appliance update for version 2 of SQL Server Parallel Data Warehouse, it changed the name of the PDW appliance to the Analytics Platform System, although that version of SQL Server appears to have retained the PDW moniker, at least informally.

Despite the confusing labeling, SQL Server PDW continues to support a massively parallel processing (MPP) architecture that distributes and parallelizes computing operations across multiple physical nodes. The MPP technology maximizes query performance within the newly dubbed APS appliance and supports high levels of query complexity and concurrence. In addition, the appliance takes advantage of SQL Server’s in-memory columnstore features, which can improve query performance even further.

To support the addition of unstructured data, the appliance includes Microsoft’s HDInsight, a Hadoop distribution based on the Hortonworks Data Platform. Hadoop is a software framework for managing and analyzing large sets of unstructured data on commodity hardware. The APS appliance also comes with PolyBase, a tool that enables T-SQL queries to access both PDW databases and the HDInsight data platform.

The APS appliance is built on Windows Server 2012. The operating system provides directory management through Active Directory, virtualization through Hyper-V, and high availability through Failover Clustering and Clustered Shared Volumes, or CSVs. In addition, SQL Server can take advantage of Windows Storage Spaces, which provide virtual drives for pooling storage resources.

Also included with the APS appliance is the hardware necessary to host the software and its data. The hardware is made up of a set of commodity servers, drives, storage devices and networking components that can be scaled out to support up to 6 petabytes of raw storage.

Not surprisingly, vendors selling the appliances outfit them with their own hardware. The Dell appliance is based on Dell’s PowerEdge R620 servers, Quanta uses its Quad-enclosure servers and HP uses HP ProLiant Gen8 servers. The appliances are delivered fully preconfigured and tested, so they’re ready to deploy when they arrive.

NEC PCIe SSD Appliance for Microsoft SQL Server

The NEC PCIe SSD Appliance for Microsoft SQL Server incorporates the power of the NEC Express 5800 scalable enterprise server series to support large-scale online transaction processing (OLTP) and business intelligence (BI) operations. The Express 5800 server has a 4U form factor (four rack units in size) and supports up to four Intel Xeon E7 processors, for a total of 24 physical cores. The server also comes with 16 available PCI-Express 3.0 I/O slots and 64 available DDR3 memory slots.

But the servers aren’t solely responsible for delivering high-performing data processing. Included with the appliance is HGST FlashMAX II PCIe server-mounted flash storage. The FlashMAX II is a multi-level cell flash unit, which means each memory unit can store more than a single bit of information. The unit also incorporates a hardware RAID mechanism optimized for flash memory. With the FlashMAX II devices, a server can hold up to 8.8 terabytes (TB) of flash storage and deliver logical scan rates of 8.2 GBps.

The NEC appliances are built according to best practice configurations, as outlined in the SQL Server Fast Track Data Warehouse (FTDW) reference architecture. The FTDW defines a core-balanced architecture that maximizes SQL Server data processing against component hardware throughput. The NEC appliance uses the FTDW configurations to balance the CPU cores against the I/O channels and storage sequential I/O capacities.

Dell Acceleration Appliance for Databases

NEC isn’t the only vendor introducing flash-based appliances. Dell’s Acceleration Appliance for Databases incorporates flash storage technology from Fusion-io. What sets Dell apart from NEC is that the Dell appliance is not built around a specific platform. Rather, its focus is on the “enterprise application,” which can include a wide range of database products, including MySQL, Sybase, Oracle Database, SAP HANA, MongoDB, Apache Cassandra and, of course, SQL Server 2014.

The Dell appliance uses the Dell PowerEdge R720, a 2U rack server that can support up to 12 TB of flash storage, 40 GB of bandwidth and 2.5 million input/output operations per second (IOPS). The flash storage is provided through Fusion ioMemory devices and uses Adaptive Flashback to protect data. Unlike many flash storage devices, which rely on RAID for their storage configurations, Adaptive Flashback handles failures at the block level rather than the device level, simplifying management and causing less disruption to business operations should a failure occur.

The PowerEdge servers, when combined with the Fusion-io flash storage, can speed up the performance of data-driven operations significantly, while reducing latency and I/O bottlenecks. The appliance is available as a standalone or high-availability product and provides a choice of networking options, including Fibre Channel and InfiniBand.

SQL Server 2014 appliances

The APS appliance is clearly aimed at big data operations that can support petabytes of data. The appliance specifically targets BI and data analytics by providing the MPP architecture, which can distribute and parallelize processing-intense computing operations, and by incorporating unstructured data into its architecture.

The Dell Acceleration Appliance for Databases is a different story. The flash storage can make some operations unbelievably fast, but it’s not built to handle petabytes of data. In addition, the appliance is not specifically fine-tuned according to FTDW best practices. Instead, the focus is on handling different data systems efficiently, which can translate to increased flexibility in the long term.

If SQL Server appliances are on your radar, you now have several options for handling your data workloads. Keep in mind, though, that it’s a changing market, so more could be coming at any time. And be sure to do your homework if you’re considering a SQL Server appliance. They have many benefits, but come with hefty price tags, and you certainly don’t want to purchase one only to discover it’s something that doesn’t fit your needs a few months down the road.

[Source:- techtarget]

Fujitsu leverages AI to develop highly accurate recognition technology for strings of handwritten Chinese characters

Fujitsu today announced the development of an artificial intelligence model that can generate highly reliable recognition of handwritten character strings. The results of this model represent the world’s highest degree of accuracy in recognizing handwritten Chinese character strings. Recognition of individual handwritten Chinese characters using deep learning and other AI models has already surpassed human recognition capability. When used on strings of handwritten characters, however, issues arise with an inability to correctly break the strings into individual characters. Given this, the new Fujitsu-developed AI model ranks degree of reliability in image recognition for handwritten character strings, assigning a high degree of reliability to correct characters and a low degree of reliability to portions that are not characters. By applying this model, recognition mistakes have been reduced to less than half that of previous technology, greatly improving the efficiency of tasks such as digitization of handwritten texts.

This technology will be used as part of Human Centric AI Zinrai, the Fujitsu AI technology. Details of this technology were announced at the 15th International Conference on Frontiers in Handwriting Recognition (ICFHR-2016), held on October 24 in China.

Development Background

Character recognition is a field where the utilization of AI promises greater task efficiency. Fujitsu Laboratories has several decades of experience in research and development relating to character recognition, and has a large portfolio of technologies, such as machine translation, in the field of Japanese language processing. In September 2015, using AI technologies modeled on the workings of the human brain, Fujitsu announced its successful demonstration of the world’s first technology to recognize individual handwritten Chinese characters with an accuracy exceeding that of humans. However, Chinese sentences are made up of strings of complex characters, and when an individual character is not clearly distinguishable, as is common in handwriting, it is difficult to recognize accurately.

Issues

Such technologies using AI start off with a supervised sample of characters so the system can learn and remember the features of the many character patterns humans use when recognizing characters. Next, an image of a string of characters is divided into parts by detecting the blank spaces that separate the radicals (the components that make up a Chinese character). Sometimes a separated area contains a single, whole character (top row of Figure 1), and sometimes parts from neighboring characters end up in one region (bottom row of Figure 1). The program then assumes each region represents an individual character, and outputs the candidate character recognition result and its degree of reliability, using a recognition algorithm based on its earlier learning. The closer the degree of reliability is to one, the higher the program’s confidence in the candidate character. It finally outputs its recognition results by selecting, in order, the combination that has the highest average degree of reliability (bottom of Figure 1). With the previous technology, however, there were times when the system would output a high degree of reliability for images that were not characters, such as component radicals, creating an issue where the system could not correctly separate characters.

Figure 2: Training and recognition processing with the heterogeneous structure deep learning model. Credit: Fujitsu

About the Technology

This Fujitsu-developed technology generates a high level of reliability only for proper characters. It does this by using a heterogeneous deep learning model which, in addition to the supervised character samples used in conventional technology, uses a newly developed supervised sample of non-characters made up of radicals and combinations of parts that do not form characters. The technology’s features are as follows.

1. Effective learning technology with heterogeneous deep learning, including non-characters

In a heterogeneous deep learning model, two types of supervised samples are used: one for existing characters, and another for non-characters. Compared with the supervised character sample, the supervised non-character sample is vastly larger, because it is created by dividing up characters and recombining their parts. By having the system remember the features of non-characters that can easily appear in combinations of neighboring parts in Chinese sentences, Fujitsu developed technology that can learn effectively even with this asymmetrical deep learning model (Figure 2a).

2. Technology to correctly break down handwritten character strings based on degree of reliability

Figure 3: Recognition results for a string of characters with the heterogeneous structure deep learning model. Credit: Fujitsu

By inputting images of candidate areas into the trained heterogeneous deep learning model, which outputs a degree of reliability for both characters and non-characters – high for candidate areas that form characters and low for those that do not – Fujitsu developed a technology that effectively separates a string of characters into individual characters (Figure 2b). An existing Chinese language processing model is then applied, and based on an analysis of whether the recognition candidates form correct Chinese, the final candidate sentence is output. Because the degree of reliability for combinations of parts that do not form existing characters is lower than that for actual characters, correct recognition results can be achieved by selecting the segmentation path with the highest degree of reliability, beginning at the start of the string of characters (Figure 3).
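
As a rough illustration of that final selection step (hypothetical scores in Python, not Fujitsu’s implementation), a segmentation made of whole characters wins because its average reliability beats any path containing non-character fragments:

    # Two candidate segmentations of the same handwritten string, with
    # made-up reliability scores from the recognition model.
    candidates = [
        # Correct split: every region is a whole character, reliability high.
        [("char_1", 0.97), ("char_2", 0.94), ("char_3", 0.95)],
        # Bad split: regions mix radicals from neighbors, reliability low.
        [("frag_1", 0.91), ("frag_2", 0.35), ("frag_3", 0.40), ("frag_4", 0.88)],
    ]

    def average_reliability(segmentation):
        return sum(score for _, score in segmentation) / len(segmentation)

    best = max(candidates, key=average_reliability)
    print([region for region, _ in best])   # ['char_1', 'char_2', 'char_3']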

Effects

When this technology was benchmarked against a database of handwritten Chinese released in 2010 by the Institute of Automation, Chinese Academy of Sciences (CASIA), which is used as a standard by academic societies, it achieved recognition accuracy of 96.3%, the highest achieved to date, surpassing previous technologies by 5%. As a result, this technology can greatly improve the efficiency of inputting handwritten text.

This technology is effective for languages that have no spacing between words, including Chinese, Japanese, and Korean. It is expected that the recognition accuracy of free-form handwritten text in Japanese will significantly improve by combining this technology with Fujitsu Laboratories’ long-accumulated language processing technology for Japanese. Fujitsu aims to bring this technology to Zinrai, its AI technology platform, in 2017, and apply it in stages to solutions such as a handwritten digital ledger system for Japan.

[Source:- Phys.org]

Bing Predicts missed badly on the U.S. elections, and here’s Microsoft’s response

As most of the world knows by now, Republican nominee Donald Trump won the 2016 presidential election this morning against Democrat Hillary Clinton. The sudden sweep of red states was an unexpected game changer in the election, and almost everybody missed the mark, even Bing.

Bing Predicts is known for calculating the outcomes of popular global events such as sports championships, entertainment awards, and legislative elections. In this particular case, Bing saw fit to grant Clinton the win with a comfortable margin; we even saw Bing Predicts give as high as a 90% probability in Clinton’s favor. But as the night went on, her lead dwindled, leaving Trump with the majority of electoral votes and the presidency on the horizon.

WinBeta reached out to Microsoft for an answer to exactly how Bing was so far off the outcome. A spokesperson replied:

“Bing Predicts uses several sources including search, web, social data, third-parties and more to inform predictions. True to the nature of all predictions, we can never guarantee 100% accuracy.”

Bing wasn’t the only prediction to miss. In fact, most major news outlets and analysts were shocked to see just how wrong their predictions were when the results came in from yesterday’s polls.

What does this mean for Bing Predicts? Probably nothing. Simply put, the system can only calculate with the data it is given, as we explained here. After all, computers are only as intelligent as mankind creates them to be.


[Source:- Winbeta]