ANTIVIRUS FOR ANDROID HAS A LONG, LONG WAY TO GO

ANTIVIRUS PROGRAMS ON PCs have a mixed track record. While generally useful, they still have to play catch-up with evolving threats–and their deep system access has on occasion enabled even worse attacks. Now, as antivirus products gain in popularity for Android devices, they appear to be making many of the same old mistakes.

A key part of the current shortcomings stems from the relative immaturity of Android antivirus offerings. Researchers at Georgia Tech who analyzed 58 mainstream options found that many were relatively easy to defeat, often because they didn’t take a nuanced and diverse approach to malware detection. Taking on the mindset of an attacker, the researchers built a tool called AVPass that works to smuggle malware into a system without being detected by antivirus. Of the 58 programs AVPass tested, only two–from AhnLab and WhiteArmor–consistently stopped AVPass attacks.

“Antivirus for the mobile platform is really just starting for some companies—a lot of the antivirus for Android may even be their first iteration,” says Max Wolotsky, a PhD student at Georgia Tech who worked on the research. “We would definitely warn consumers that they should look into more than just AV. You want to be cautious.”

Modern antivirus uses machine-learning techniques to evolve with the malware field. So in creating AVPass, the researchers started by developing methods for defeating defensive algorithms they could access (like those created for academic research or other open-source projects) and then used these strategies as the basis for working out attacks against proprietary consumer antivirus—products where you can’t see the code powering them. The team will present on and release AVPass at the Black Hat hacking conference in Las Vegas on Thursday.

Free Pass

To test the 58 Android antivirus products and figure out which bypasses would work against each of them, the researchers used a service called VirusTotal, which scans submitted links and malware samples with dozens of antivirus tools and reports what each one found. By querying VirusTotal with different malware components and seeing which tools flagged which samples, the researchers were able to form a picture of the detection features each antivirus relies on. Under an academic license, VirusTotal limited the group to fewer than 300 queries per malware sample, but the researchers say even this small number was adequate for gathering data on how the different services go about detecting malware.
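
That reconnaissance step can be pictured with a short sketch against VirusTotal's public v2 REST API, which returns a per-engine verdict for a file it has already scanned. This is only an illustration of the kind of query involved, not AVPass itself; the API key and sample hash are placeholders, and the public API is rate-limited much like the academic license the researchers used.

```python
import requests

VT_REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"
API_KEY = "YOUR_VT_API_KEY"  # placeholder for a VirusTotal API key

def engine_verdicts(sample_hash):
    """Return {engine_name: detected?} for a sample VirusTotal has already scanned."""
    resp = requests.get(VT_REPORT_URL,
                        params={"apikey": API_KEY, "resource": sample_hash})
    resp.raise_for_status()
    report = resp.json()
    if report.get("response_code") != 1:   # 1 means a report exists for this hash
        return {}
    return {engine: info["detected"]
            for engine, info in report.get("scans", {}).items()}

# Querying several obfuscated variants of one sample and comparing the verdict
# maps suggests which features each engine keys on.
```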

Before this reconnaissance, the team developed a feature for AVPass called Imitation Mode, which shields the test samples submitted for antivirus scanning so the snippets themselves wouldn’t be identified and blacklisted. “The Imitation Mode is for our malware obfuscation,” says Chanil Jeon, another researcher who worked on the project. “We extract particular malware features and insert them into an empty app, so we can test which feature or which combination is important for malware detection.” The team worked with mainstream malware samples from malware libraries like VirusShare.com and DREBIN.
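
The same idea, isolating which feature or combination of features trips a given detector, can be sketched as a simple probing loop. In the sketch below, `build_carrier_apk` and `is_flagged` are hypothetical helpers standing in for AVPass internals: one packs the chosen malware features into an otherwise empty app, the other submits the result to a scanner and reports whether it was flagged.

```python
from itertools import combinations

def probe_features(features, build_carrier_apk, is_flagged, max_combo=2):
    """Record which individual features and small feature combinations get flagged."""
    verdicts = {}
    for size in range(1, max_combo + 1):
        for combo in combinations(features, size):
            apk = build_carrier_apk(combo)     # hypothetical: empty app + these features
            verdicts[combo] = is_flagged(apk)  # hypothetical: ask the scanner
    return verdicts

# Combinations that flip from clean to flagged reveal the attribute (permission,
# string, API call, ...) a particular engine is actually looking for.
```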

AVPass is an open source prototype, part of broader Georgia Tech research into machine-learning algorithms (like those used in antivirus) and the extent to which they can be manipulated and exploited. But it also serves as commentary on the evolving landscape of mobile defense.

Room To Grow

If there’s a silver lining here, it’s that Android antivirus tools have an easier job than their PC equivalents, at least for now. “Android malware is not much of malware at all compared to PC malware,” says Mohammad Mannan, a security researcher at Concordia University in Montreal who has studied antivirus vulnerabilities. “They are just rogue apps in most cases, so they are far easier to detect.” And Mannan notes that though Android antivirus apps have a lot of leeway in the system, they aren’t as privileged as antivirus apps on PCs, which could potentially cut down on concerns that antivirus can sometimes be exploited as a security vulnerability in itself. “Mobile AVs run like a privileged app, but are still just an app in the end, not part of the operating system or kernel,” he says.

For now, though, the potential advantages seem overshadowed by the immaturity of the market. The AVPass team says that Android antivirus developers need to build out their products so the programs are looking for multiple malicious attributes at once. It’s much easier to sneak past one security guard than 10. And they note that their research would have been much more difficult and time-consuming if tools like VirusTotal were less specific in the information they disclose about each service.
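
The "ten security guards" point is essentially ensemble detection: flag an app when any of several independent checks fires, so an attacker has to evade all of them at once. A minimal sketch, with each detector as a hypothetical callable:

```python
def ensemble_verdict(apk, detectors, min_hits=1):
    """
    Flag `apk` if at least `min_hits` of several independent checks fire.
    Each detector is a hypothetical callable (signature match, permission
    heuristics, API-call model, ...) returning True when it considers the app
    malicious; evading the ensemble means evading every single check.
    """
    hits = sum(1 for detect in detectors if detect(apk))
    return hits >= min_hits
```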

“These results aren’t the most surprising,” Wolotsky says. “We knew going into this as security researchers that the mobile domain is much less advanced. We hope AVPass will give [antivirus developers] a way to see what works and what doesn’t, because I’m not sure they’ve had that.”

[Source:- Wired]

How Technology Can Help You Engage Your Audience the Right Way

If you’re looking for a scapegoat for just about any of the world’s issues, you probably know technology makes a good choice. I can’t tell you how many times I’ve heard people talk about how technology and being “plugged in” is making relationships harder than ever.

For some, I’m sure that’s probably true. At the end of the day, though, technology is a tool, and your relationships with other people — including your audience — depend on how you use it.

For marketers, technology presents an opportunity for you to reach and connect with your audience. Content marketing tools, for example, help you plan and craft your brand’s most engaging messages. Social media tools help you get them into the hands of the right people. Marketing automation platforms help you streamline and automate your processes, among other things.

The only catch? You can’t entirely remove the human element from the equation and let technology do it all.

Learn the Golden Equation: Technology + authenticity = engagement

If you had your choice between an engaging, personalized message from an authentic thought leader at a company and boring, automated content coming from an impersonal corporate logo, which would you prefer? It’s no contest: We’d all choose personalized content from real humans.

Marketers can use technology to create that content, deliver it, measure their efforts — any number of things. But tech, as ever-present as it is, won’t magically result in audience engagement and stronger relationships. Like I said, it’s a tool that needs to be used to make your job of connecting with your audience easier than before.

Sadly, too many brands forget their role in building those relationships and overlook the human elements that are necessary to make their messages resonate. They then wonder why engagement is low, assuming technology has created this huge trust barrier and made it harder to connect instead of looking in the mirror to find the root of the problem: They haven’t humanized their brands or used the right content to communicate that.

Make the shift from me to you

Talking “at” versus talking “with”: It’s a big distinction. Too many companies are knee-deep in the former, pushing out information like that boorish uncle at your folks’ annual Fourth of July picnic who simultaneously says everything and nothing.

In the past, brands would develop an idea or a message and push it out for everyone and their mother to see, whether those recipients truly cared to see it or not. In my business and marketing book, “Top of Mind,” I call this “Me Marketing,” where marketers only push out what they want and focus on themselves in the process. (I’ve yet to meet one person who truly enjoys getting spammed with a ton of promotional emails that were clearly sent out en masse with no personalization at all.)

Today, effective brands and marketers are taking a different approach. They have shifted to what I call “You Marketing” and have begun creating content for the actual audience members receiving it.

There’s a much greater focus on what audiences want and how they like to receive information, engage with content, and work with brands. Marketers need to listen to and authentically engage with audiences, and they need to do it on that audience’s terms. Technology can help.

Pursue new technology for better relationships

One example of a tool that’s taking the modern customer experience and running with it is PingPilot. Launched by SCORCH, this software aims to change the conversation between businesses and individuals by allowing people to choose their preferred means of communication. The means of conversation can change depending on the client’s needs — live chat, voice, and SMS are all viable channels. Essentially, businesses move over and give consumers the keys to the car, as well as the wheel.

Over time, this allows brands and consumers to forge sincere bonds based on trust and live interactions, not chatbots or automated replies. Each touchpoint becomes an opportunity to build a better understanding of customers; data from these interactions can improve the company’s marketing stack and explode lead generation, not to mention conversions.

This is a prime example of how technology actually helps build stronger personal relationships and connections, not replace them.

Everyone loves to hate something, but it’s time to pull back from blaming technology left and right. Instead of cursing a technology-rich world that’s made Snapchat filters and hashtags so ubiquitous you hardly notice them anymore, it’s wiser to look deeper into what those selfies and hashtags mean to the people who make, view, and engage with them. Authenticity between brands and audiences has technology at its core, but it takes human hands, minds, and hearts to execute it.

John Hall is the CEO of Influence & Co., a keynote speaker, and the author of “Top of Mind.” You can book John to speak here.

[Source:- Forbes]

Nipun “Javatinii” Java Clicks His Way to 2nd Bracelet of the Summer in $1,000 WSOP.com Event

Nipun Java

It didn’t take Nipun “Javatinii” Java days to win a World Series of Poker gold bracelet. No, it only took him a little more than half a day. Java took on a field of 951 players and emerged victorious after just 14 and a half hours. For his victory, he won $237,668 as well as his second World Series of Poker gold bracelet of the summer after coming out on top in the $1,000 Tag Team Event back in June together with Aditya Sushant.

It was with 20 players left that Java first made it onto the radar. He picked off a bluff from “topkoks,” who was running extremely hot at the time and had eliminated three players in the span of 20 minutes. Java called with just top pair on the river to pick off the all-in bluff shove from “topkoks” and become the chip leader.

Java exchanged the chip lead several times with “WhyTry,” who ended up finishing in 11th place for $13,211. In one hand, Java flopped a full house and was able to get tons of value out of “WhyTry” to put a stranglehold on the top position. Java then eliminated “AznBlazer469” in 10th place to come to the final table in second place, just behind Jason “jadedjason” James, who would eventually finish second.

Java stayed mostly quiet at the final table, picking good spots and waiting to have the goods to get it in. He eliminated Sean “Hurricane27” Legendre in 7th place in a cooler spot where Legendre had kings and “Javatinii” was holding aces. Later, during four-handed play, Java called a shove from Evan “YUDUUUUUUUUU” Jarvis while holding kings against Jarvis’ nine-ten offsuit. Java held up and took the chip lead with just three players left.

Heads-up play lasted for a while with James doubling up first by winning a coin flip with pocket eights against Java’s ace-ten, but Java doubled right back with ace five against James’ pocket queens after an ace hit the turn. The end of it all came when Java flopped two pair against James’s flush and straight draws. The turn and river bricked out and Nipun “Javatinii” Java was able to take down the tournament and the $237,668 first prize.

Place | Player | Country | Prize (USD)
1 | Nipun “Javatinii” Java | United States | $237,668
2 | Jason “jadedjason” James | Canada | $146,202
3 | Richard “jklolz” Tuhrin | United States | $103,326
4 | Evan “YUDUUUUUUUUU” Jarvis | Canada | $73,911
5 | Vinny “Mr_Sinister” Pahuja | United States | $53,595
6 | Jeffrey “Jeffrey27rj” Turton | United States | $39,510
7 | Sean “Hurricane27” Legendre | United States | $29,415
8 | Steven “meditations” Enstine | United States | $22,185
9 | Stanley “stanman420” Lee | United States | $17,075

That wraps up Event #71: $1,000 WSOP.com Online No-Limit Hold’em but Saturday marks the beginning of Event #73: $10,000 No-Limit Hold’em Main Event – World Championship, and PokerNews will be here with coverage of it all, so don’t miss a thing.

WSOP Online Event

Be sure to complete your PokerNews experience by checking out an overview of our mobile and tablet apps here. Stay on top of the poker world from your phone with our mobile iOS and Android app, or fire up our iPad app on your tablet. You can also update your own chip counts from poker tournaments around the world with MyStack on both Android and iOS.

[Source:- PokerNews]

IBM: Next 5 years AI, IoT and nanotech will literally change the way we see the world

Perhaps the coolest thing about IBM’s 9th “Five Innovations that will Help Change our Lives within Five Years” predictions is that none of them sound like science fiction.

“With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years,” said Dario Gil, vice president of science & solutions at IBM Research in a statement.

Among the five areas IBM sees as key in the next five years are artificial intelligence, hyperimaging and small sensors. Specifically, according to IBM:

1. In five years, what we say and write will be used as indicators of our mental health and physical wellbeing. Patterns in our speech and writing analyzed by new cognitive systems will provide tell-tale signs of early-stage mental and neurological diseases that can help doctors and patients better predict, monitor and track these diseases. At IBM, scientists are using transcripts and audio inputs from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression.

Today, it only takes about 300 words to help clinicians predict the probability of psychosis in a user. Cognitive computers can analyze a patient’s speech or written words to look for tell-tale indicators found in language, including meaning, syntax and intonation. Combining the results of these measurements with those from wearable devices and imaging systems (MRIs and EEGs) can paint a more complete picture of the individual for health professionals to better identify, understand and treat the underlying disease. (A brief sketch of this kind of language analysis appears after the fifth prediction below.)

2. In five years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.

A view of the invisible or vaguely visible physical phenomena all around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, using millimeter wave imaging, a camera and other sensors, hyperimaging technology could help a car see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead and its distance and size. Embedded in our phones, these same technologies could take images of our food to show its nutritional value or whether it’s safe to eat. A hyperimage of a pharmaceutical drug or a bank check could tell us what’s fraudulent and what’s not.

3. In the next five years, new medical labs on a chip will serve as nanotechnology health detectives, tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyze a disease that would normally be carried out in a full-scale biochemistry lab.

The lab-on-a-chip technology could ultimately be packaged in a convenient handheld device to let people quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids, sending this information streaming into the cloud from the convenience of their home. There it could be combined with data from other IoT-enabled devices, like sleep monitors and smart watches, and analyzed by AI systems for insights. When taken together, this data set will give us an in-depth view of our health and alert us to the first signs of trouble, helping to stop disease before it progresses.

4. In five years, new, affordable sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real-time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of the vast natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events. Scientists at IBM are working with natural gas producers such as Southwestern Energy to explore the development of an intelligent methane monitoring system as part of the ARPA-E Methane Observation Networks with Innovative Technology to Obtain Reductions (MONITOR) program.

5. In five years, we will use machine-learning algorithms and software to help us organize information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a “macroscope” – but unlike the microscope, which sees the very small, or the telescope, which sees far away, it is a system of software and algorithms that brings all of Earth’s complex data together to analyze it for meaning.

By aggregating, organizing and analyzing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies. Beyond our own planet, macroscope technologies could handle, for example, the complicated indexing and correlation of various layers and volumes of data collected by telescopes to predict asteroid collisions with one another and learn more about their composition.
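
The first prediction's language-analysis idea can be pictured with a toy text classifier. The sketch below uses plain TF-IDF features and logistic regression as stand-ins for the much richer semantic, syntactic and intonation features IBM describes; the transcript snippets, labels and model choice are entirely invented for illustration and say nothing about IBM's actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented transcript snippets and labels (1 = clinician-flagged), for illustration only.
transcripts = [
    "I keep losing my train of thought and the words feel disconnected",
    "We talked about the weekend and planned a trip to the coast",
    "They are watching through the wires and the signals never stop",
    "The meeting ran long but the project is back on schedule",
]
labels = [1, 0, 1, 0]

# TF-IDF over word 1- and 2-grams stands in for richer language features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

# Probability assigned to the "flagged" class for a new snippet.
print(model.predict_proba(["my thoughts keep jumping and nothing connects"])[:, 1])
```

In practice the interesting work lies in the features themselves (coherence measures, syntactic complexity, prosody), not in the final classifier.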

 

IBM has had some success with its “five in five” predictions in the past. For example, in 2012 it predicted computers would have a sense of smell. IBM says “sniffing” technology is already in use at the Metropolitan Museum of Art, working to preserve and protect priceless works of art by monitoring fluctuations in temperature, relative humidity, and other environmental conditions. “And this same technology is also being used in the agricultural industry to monitor soil conditions, allowing farmers to better schedule irrigation and fertilization schedules, saving water and improving crop yield,” IBM said.

 

In 2009 it predicted that buildings would sense and respond like living organisms. IBM said it is working with the U.S. General Services Administration (GSA) to develop and install advanced smart building technology in 50 of the federal government’s highest energy-consuming buildings. “Part of GSA’s larger smart building strategy, this initiative connects building management systems to a central cloud-based platform, improving efficiency and saving up to $15 million in taxpayer dollars annually. IBM is also helping the second largest school district in the U.S. become one of the greenest and most sustainable by making energy conservation and cost savings as easy as sending a text message,” IBM stated.

 

 

[Source:- Javaworld]

 

An easier way to set up SQL Server on an Azure virtual machine

A new setup procedure will allow users to configure SQL Server on a Microsoft Azure virtual machine without the aid of a database administrator.

“The new wizard for building and configuring a new virtual machine with SQL Server 2014 is very well put together,” said Denny Cherry, founder and principal consultant for Denny Cherry and Associates Consulting. “It helps solve a lot of the complexity of building a new SQL Server, specifically around how you need to configure the storage in order to get a high-performing SQL Server VM.”

Joseph D’Antoni, principal consultant at Denny Cherry and Associates Consulting, said that one of the major challenges with Azure was allocating storage. For instance, he said, to configure SQL Server on an Azure VM, you needed to allocate disks manually to get the needed amount of IOPS. This meant you had to know exactly what your application’s storage needs were for optimal performance, and many people were “kind of guessing,” D’Antoni said. With the new wizard, all you have to do is enter the required number of IOPS and storage is allocated automatically.
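
In rough terms, the manual math the wizard now hides looks something like the sketch below: divide the IOPS you need by the IOPS a single data disk delivers and stripe that many disks together. The 5,000-IOPS figure is roughly what an Azure Premium Storage P30 disk offered; treat it as an assumption and substitute the numbers for whatever disk tier you actually use.

```python
import math

def disks_needed(required_iops, iops_per_disk=5000):
    """How many data disks must be striped together to reach the target IOPS?"""
    return max(1, math.ceil(required_iops / iops_per_disk))

print(disks_needed(20000))   # -> 4 disks pooled into one striped volume
```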

Automating SQL Server setup for an Azure VM means that no longer does everything have to be done manually: Connectivity, performance, security and storage are configured automatically during setup. “I think it does simplify what was a pretty complex process,” D’Antoni said.

You can now use the Internet to set up SQL Server connectivity and enable SQL Server authentication through the Azure Web portal. Previously, connecting SQL Server to an Azure VM via the Internet required a multistep process using SQL Server Management Studio. The new automated configuration process lets you pick whether to expand connectivity to the whole Azure virtual network or to connect only within the individual VM.
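
Once the wizard has opened public connectivity and enabled SQL Server authentication, connecting from outside Azure is a plain SQL login over port 1433. Below is a minimal sketch using pyodbc; the server name, login and driver name are placeholders for your own VM's endpoint, the SQL account created during setup, and whichever SQL Server ODBC driver is installed locally.

```python
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"   # use whichever driver you have installed
    "SERVER=mysqlvm.cloudapp.net,1433;"         # hypothetical public endpoint
    "DATABASE=master;"
    "UID=sqladmin;PWD=<password>;"              # SQL authentication credentials
)

conn = pyodbc.connect(conn_str, timeout=10)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
conn.close()
```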

The new process for configuring SQL Server on an Azure virtual machine also includes automated patching and automated backup. Automated patching allows you to pick a time when you want all your patches to occur, so users can schedule patches to minimize the impact they’ll have on the workload. Automated backup allows you to specify how long to keep backups.

“I think that these are a great enhancement on the old process of having to know how to configure these components manually within the VM,” Cherry said, “because these configurations can get tricky to configure.”

D’Antoni added that this innovation is going to affect smaller organizations the most, because it means that they won’t need an expert to move SQL Server onto an Azure virtual machine. “[The simplified configuration] gives the power to someone who is deploying a VM when they would have needed an administrator or a DBA before. To that extent, it’s kind of a big deal.”

 
[Source:- searchsqlserver]

 

New Red Hat project looks a lot like a Docker fork

There have been rumblings about a possible split in the Docker ecosystem. Now Red Hat has unveiled a project that may not be pitched as a Docker fork, but sure has the makings of one.

The OCID project uses many Docker pieces to create a runtime for containers that can be embedded directly into the Kubernetes container orchestration system.

That version of the Docker runtime, Red Hat says, has been built for those who “need a simple, stable environment for running production applications in containers” — a broad hint that Docker’s “move fast and break (some) things” philosophy of product development has spurred a backlash.

The mother of invention

The technical details of OCID are not complicated. It’s a set of projects that provides Kubernetes with the ability to obtain and run container images by way of a version of the core of the Docker runtime — the “runC” project — that has been modified to fit Kubernetes’ needs.

Some of these modifications are purely practical, providing Kubernetes with features that are useful when running containers at scale, such as being able to verify if a current container image is the same as one found in a container registry.
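
That registry-comparison feature can be illustrated with the Docker Registry v2 HTTP API, which reports an image's manifest digest in the Docker-Content-Digest response header. The sketch below is only a simplified stand-in for what OCID does natively; it skips authentication, and the local digest is assumed to come from the local image store (for example, the RepoDigests field shown by `docker inspect`).

```python
import requests

MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def registry_digest(registry, repository, tag):
    """Ask a Docker Registry v2 endpoint for the manifest digest of repository:tag."""
    url = f"https://{registry}/v2/{repository}/manifests/{tag}"
    resp = requests.get(url, headers={"Accept": MANIFEST_V2})  # auth omitted for brevity
    resp.raise_for_status()
    return resp.headers["Docker-Content-Digest"]

def image_is_current(local_digest, registry, repository, tag):
    """True when the locally stored image still matches what the registry serves."""
    return local_digest == registry_digest(registry, repository, tag)
```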

Other features are more strategic variations on existing Docker functionality, with philosophical differences that stem from how Kubernetes is used in production. The OCID storage driver, for instance, “provide[s] methods for storing filesystem layers, container images, and containers,” according to Red Hat, but allows storage images to be mounted and handled more like Linux filesystems, instead of in-memory objects only known to Docker.

Fork in the road ahead

Reading between the lines of the news release, there are strong hints that the OCID project arose because Red Hat found itself at odds with the pace and path of Docker’s development.

According to the release, work on the storage component of OCID was hobbled because “upstream Docker was changing at a rate that made it difficult to build off of.” Likewise, when Red Hat proposed remote examination of a container as a possible standard add-on, “the Docker community showed little interest in such a capability.”

Chalk this up to the fact that Red Hat and Docker generally aim for different audiences. Red Hat targets enterprises that want to run applications at scale by way of a whole gamut of tools: its newly container-centric Linux stack, its OpenShift container platform (version 3.3 was released today as well), and its focus on Kubernetes as the mechanism for combining and managing everything. The sheer size of such a stack, and the demands made on it by an enterprise, mean it can’t be built on shifting sands.

What Red Hat wants

Docker, on the other hand, has been driven more by the enterprise developer than by the enterprise itself. It isn’t afraid to iterate quickly and assume its audience is agile enough to keep up. It has also been attempting to present itself as a one-stop, end-to-end solution for deployment.

Bundling Docker Swarm as a native orchestration solution, for instance, was meant to provide an out-of-the-box option to get a cluster running — and to give Docker users a reason to use Docker-native tools generally. But Kubernetes is making a case for itself, both because of its open-ended community and because people serious about scale (such as OpenStack) tend to turn to Kubernetes as a once-and-for-all solution.

It’s not in Red Hat’s best interest to seem divisive, though. To that end, the announcement about OCID is liberally salted with statements of open source goodwill: Red Hat wants to “drive broad collaboration” by contributing these tools back to the container ecosystem at large and by “engaging with upstream open source communities.”

But Docker is under no obligation to accept any particular pull request. And if Red Hat’s intention is to build a powerful container stack that’s distinctly its own, it will be all but obliged to diverge from Docker. The question isn’t whether Red Hat will do so, but by how much and to what end.

[An earlier version of this story incorrectly stated that the Open Container Initiative (OCI) as well as Red Hat was part of the OCID project. The OCI is not involved with OCID.]

This story, “New Red Hat project looks a lot like a Docker fork” was originally published by InfoWorld.

[Source:- JW]

Office Delve for Windows 10 makes its way to Windows 10 Mobile in Preview

Delve, the newest addition to Office 365, has still not been officially announced, but that doesn’t stop the app from coming to mobile. It was a PC-only UWP application until earlier today. The app was also available on Apple’s iOS and Google’s Android operating systems, but not for Microsoft’s own mobile platform.

To download Delve, all you have to do is go to the link at the bottom of the article, but to use it you, unfortunately, need a Work or School account, as with many preview apps on the Windows Store.

But what is Delve? Well, the official description describes it like this:

Delve helps you stay in the know, powered by who you know and what they are working on. With this preview app for Windows 10, you’ll be notified about document updates, and get document suggestions that are relevant to your work. You can also find people and get back to your recent documents and attachments, all in one place – all in one app.

Key features of the app:

  • Get updates about what your colleagues are working on
  • Find relevant documents and attachments based on people you know
  • Get back to important documents you’re actively working on

[Source:- Winbeta]

Robot offers safer, more efficient way to inspect power lines

A team led by the University of Georgia’s Javad Mohammadpour has designed, prototyped and tested a robot that can glide along electrical distribution lines, searching for problems or performing routine maintenance. Credit: Mike Wooten/University of Georgia

A robot invented by researchers in the University of Georgia College of Engineering could change the way power lines are inspected, providing a safer and more cost-effective alternative.

Currently, line crews have to suit up in protective clothing, employ elaborate safety procedures and sometimes completely shut off the power before inspecting a power line. It can be difficult, time-consuming and often dangerous work.

A team led by Javad Mohammadpour, an assistant professor of electrical engineering, has designed, prototyped and tested a robot that can glide along electrical distribution lines, searching for problems or performing routine maintenance.

Distribution lines carry electricity from substations to homes, businesses and other end users.

The self-propelled robot looks like a miniature cable car and is approximately the size of a coffee maker, much smaller and lighter than similar devices now used by utility companies.

“This is a tool that’s small enough for a single utility worker to pack in a truck or van and use daily,” Mohammadpour said. “Some of the robots currently in use weigh 200-300 pounds while our robot is only 20-25 pounds.”

Equipped with a spinning brush, the robot can clear utility lines of vegetation, bird droppings, salt deposits (a problem particular to coastal areas) or other debris that may degrade the line. It also has an onboard camera, which allows crews to closely inspect potential problem areas. The robot is wireless and can be controlled by a smartphone, tablet or laptop.

Robot offers safer, more efficient way to inspect power lines
Javad Mohammadpour, Farshid Abbasi and Rebecca Miller teamed up to design a light-weight robot that can glide along electrical distribution lines, searching for problems or performing routine maintenance. Credit: Mike Wooten/University of Georgia

Mohammadpour worked with doctoral student Farshid Abbasi and master’s student Rebecca Miller on the project. Abbasi focused on the mechanical design of the robot while Miller developed the device’s programming, electronics and sensors. Their work was funded by Georgia Power.

“This is our first prototype, and there are a number of advances we would like to explore, including making the robot more autonomous,” Mohammadpour said. “For example, some decision-making could be made on board. If the robot detects a problem, it could send a signal to the controller instead of requiring a person to monitor the robot in real time.”

In addition, Mohammadpour said the robot could be outfitted with GPS technology. This would allow the robot to geo-tag potential problems along electrical lines, alerting utility workers to the need for follow-up inspections at specific locations.
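
Put together, the autonomy and GPS ideas amount to a small on-board loop: inspect, and when something looks wrong, geo-tag it and notify the operator rather than streaming video for a person to watch in real time. The sketch below is purely hypothetical; all four callables stand in for the robot's camera, defect detector, GPS receiver and wireless link.

```python
import json
import time

def patrol(read_camera_frame, detect_defect, read_gps, send_alert, period_s=1.0):
    """Hypothetical on-board loop: geo-tag and report defects instead of
    requiring an operator to monitor the robot in real time."""
    while True:
        frame = read_camera_frame()
        finding = detect_defect(frame)      # e.g. vegetation, corrosion, debris
        if finding:
            lat, lon = read_gps()
            send_alert(json.dumps({"defect": finding, "lat": lat,
                                   "lon": lon, "time": time.time()}))
        time.sleep(period_s)
```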

 

[Source:- Phys.org]