Node.js 7 set for release next week

The Node.js Foundation will release version 7 of the JavaScript platform next week. With the new release, version 6 will move to long-term support, and version 0.10 will reach “end of life” status.

Node 7, offered in beta in late September, is a “checkpoint release for the Node.js project and will focus on stability, incremental improvement over Node.js v6, and updating to the latest versions of V8, libuv, and ICU (International Components for Unicode),” said Mikeal Rogers, Foundation community manager.

But version 7 will have a short shelf life. “Given it is an odd-numbered release, it will only be available for eight months, with its end of life slated for June 2017,” Rogers said. “Beyond v7, we’ll be focusing our efforts on language compatibility, adopting modern Web standards, growth internally for VM neutrality, and API development and support for growing Node.js use cases.”

The release of a new version means status changes for older versions. Most important, users on the 0.10 line need to transition off this release at once, since it will no longer be supported after this month, the Foundation said. There will be no further releases, including security or stability patches. The 0.12 release, meanwhile, goes to End of Life status in December.

“After Dec. 31, we won’t be able to get OpenSSL updates for those versions,” Rogers said. “So that means we won’t be able to provide any security updates. Additionally, the Node.js Core team has been solely maintaining the version of V8 included in Node.js v0.10 since the Chromium team retired it four years ago. This represents a risk for users, as the team will no longer maintain this.”

Version 6 becomes a Long Term Support (LTS) release today. “In a nutshell, the LTS strategy is focused on creating stability and security for organizations with complex environments that find it cumbersome to continually upgrade Node.js,” Rogers said. “These release lines are even-numbered and are supported for 30 months.”

Node v6 is the stable release until April 2018, meaning that new features only land in it with the consent of the Node project’s core technical committee. Otherwise, changes are limited to bug fixes, security updates, documentation updates, and improvements where the risk of breaking existing applications is minimal. After April 2018, v6 transitions to maintenance mode for 12 months, with only critical bugs and security fixes offered, as well as documentation updates.

“At the current rate of download, Node.js v6 will take over the current LTS line v4 in downloads by the end of the year,” Rogers said. “Node.js v4 will stop being maintained in April 2018.”

 

 

[Source:- JW]

 

New software helps to find out why ‘jumping genes’ are activated

The genome is not a fixed code; it is flexible and allows changes in the genes. Transposons, so-called jumping genes, interpret this flexibility in a much freer way than “normal” genes: they reproduce within the genome and choose their positions themselves. Transposons can also jump into a gene and render it inoperative. This makes them an important distinguishing mark in the development of different organisms.

Unclear what triggers transposon activity

However, it is still unclear how jumping genes developed and what influences their activity. “In order to find out how, for instance, climate zones influence activity, we must be able to compare the frequency of transposons in different populations — in different groups of individuals,” explained bioinformatician Robert Kofler from the Institute of Population Genetics at the University of Veterinary Medicine, Vienna. But this frequency has not yet been determined precisely.

New software for a low-priced method

Transposons are detected by DNA sequencing. But this detection cannot be carried out for every single member of a population. “At the moment, this would go beyond the available resources regarding finance and amount of work. The only — and much cheaper — option is to analyse an entire population in one reaction,” explained last author Christian Schlötterer. This method, which he has established using the example of fruit flies, is called Pool-Seq. It is also routinely applied to detect transposons. Existing analysis programmes, however, could not provide a precise result in this case. So far, each analysis has been biased by different factors such as the sequencing depth and the distance between paired reads.

For this purpose, Kofler developed the new software PoPoolationTE2. “If we sequence entire populations, each reaction provides a different result. The number of mixed individuals is always the same, but the single individuals differ,” explained Kofler. Furthermore, technical differences in the sample processing, among others, have influenced the analysis so far. PoPoolationTE2 is not affected by these factors. Thus, questions about the activity of transposons can be answered precisely for Pool-Seq reactions.
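
The central quantity such an analysis produces is the population frequency of each insertion, i.e. what fraction of the pooled individuals carry a given transposon at a given site. Below is a minimal sketch of that idea, assuming the frequency is estimated from the ratio of insertion-supporting reads to all reads covering the site; it illustrates the general principle only, not the actual PoPoolationTE2 algorithm, and all counts are hypothetical.

```python
# Conceptual sketch: estimating the population frequency of a transposon
# insertion from pooled sequencing (Pool-Seq) read counts at one genomic site.
# This illustrates the general principle only, not the PoPoolationTE2 algorithm.

def insertion_frequency(insertion_reads, reference_reads):
    """Estimate how common a TE insertion is in the pooled individuals,
    as the proportion of reads at the site that support the insertion."""
    total = insertion_reads + reference_reads
    if total == 0:
        return None  # site not covered in this pool; no estimate possible
    return insertion_reads / total

# Hypothetical counts for the same insertion site in two fly populations:
pop_a = insertion_frequency(insertion_reads=42, reference_reads=158)   # 0.21
pop_b = insertion_frequency(insertion_reads=120, reference_reads=80)   # 0.60
print(f"Population A: {pop_a:.2f}  Population B: {pop_b:.2f}")
```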

Interesting for cancer research

“The unbiased detection of transposon abundance enables a low-price comparison of populations from, for instance, different climate zones. In a next step, we can find out if a transposon is very active in a particular climate zone,” said Kofler. In principle, the bioinformatician has developed this new software for Pool-Seq. But as this method is also applied in medical research and diagnosis, the programme is also interesting for cancer research or the detection of neurological changes since transposons also occur in the brain.

Lab experiments confirm influencing factors

Lab experiments can indicate the factors influencing transposons. Last author Schlötterer explained these factors referring to an experiment with fruit flies: “We breed a hundred generations per population and expose them to different stimuli. We sequence at every tenth generation and determine if a stimulus has influenced the activity of the transposons. Thus, we can describe the activity of transposons in fast motion, so to say.” If the abundance is low, the scientists assume that the transposons are only starting to become more frequent. If a transposon reproduces very quickly in a particular population, this is called an invasion. If a jumping gene is detected in an entire population and not in another one, it could have been positively selected.

 

[Source:- Science Daily]

New 3D viewer for improved digital geoscience mapping

Image result for New 3D viewer for improved digital geoscience mapping

Over the years, techniques and equipment for digital mapping have revolutionized the way geoscience field studies are performed.

Now, unique new software for virtual model interpretation and visualization is to be presented at the 2nd Virtual Geoscience Conference (VGC 2016) in Bergen, Norway.

The conference will take place on 21-23 September and represents a multidisciplinary forum for researchers in the geosciences, geomatics and related disciplines to share their latest developments and applications.

Simon Buckley and colleagues at Uni Research CIPR are not just hosting the conference in Bergen, but will present their latest contribution to the field:

High performance 3D viewer

The software, called LIME, is a high-performance 3D viewer that can be highly useful for geoscientists returning to the office after fieldwork.

The software allows them to explore their 3D datasets and perform measurements, analysis and advanced visualization of different data types. The software is developed by the Virtual Outcrop Geology Group (VOG), a collaboration between Uni Research CIPR in Bergen and the University of Aberdeen, UK.

“The group has been at the forefront of digital outcrop geology for over ten years, pioneering many of the developments in data acquisition, processing, and distribution. To facilitate the interpretation, visualisation and communication of 3D photorealistic models, we have been developing LIME for over five years,” Buckley says.

On the researcher’s own laptop

One of the unique things about LIME is that it can be downloaded and used on the researcher’s own laptop, and can handle very large 3D datasets with high performance.

“It allows the users to integrate 3D models from processing software, and do analysis and interpretation, to put together lots of types of data collected in fieldwork,” Buckley explains.

Digital mapping technology for many geoscience applications is based on a combination of 3D mapping methods: laser scanning and photogrammetry — 3D modelling from images — from the ground, from boats, and from helicopters for very large mountainsides.

And more recently: from unmanned aerial vehicles, or drones.

“In addition to this we focus on fusing new imaging techniques for mapping surface properties. An example is hyperspectral imaging, an infrared imaging method that allows the surface material content of an outcrop, building or drill core to be mapped in detail and remotely. This is what I call phase one of the digital geosciences mapping revolution, which has now become relatively mature,” Buckley says.

Integration of multiple techniques

In phase two, collection of data from digital mapping is becoming ubiquitous, but researchers around the world who are new to using this type of data can still struggle with learning curves, making it difficult for them to analyze their models, Buckley at Uni Research CIPR underscores. This is the basis for LIME:

“Here is our advantage, as we work on the integration of multiple techniques and data types, interpretation software like LIME, databases for storing, accessing and manipulating the data, and mobile devices — viewing and interpretation on tablets, in the field,” Buckley says.

The models collected using digital mapping techniques, combined with the LIME software, enables geologists to study exposed outcrops and rock formations which are otherwise very difficult to access.

“Looking at details of the outcrop and dropping in new sorts of data all of a sudden becomes easier,” Buckley says. Examples are integration of interpretation panels, geophysical data or a new sedimentary log, which looks at different rock types.

Key features

One of the key features of the high-performance 3D viewer is that you can integrate images and project them onto the 3D models.
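
As a rough illustration of what projecting an image onto a model involves, the sketch below maps 3D vertices into the pixel coordinates of a photograph with a simple pinhole camera model. The camera parameters and points are invented for the example; a production viewer such as LIME additionally handles occlusion, lens distortion and blending between overlapping images.

```python
# Illustrative pinhole-camera projection: map 3D model vertices to the pixel
# coordinates of a photograph so the image can be draped over the model.
# Camera parameters and points are invented for this example.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # intrinsics: focal lengths and principal point (pixels)
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # rotation from world to camera coordinates
t = np.array([0.0, 0.0, 5.0])             # camera translation

def project(points_world):
    """Project Nx3 world points to Nx2 pixel coordinates."""
    cam = (R @ points_world.T).T + t      # world -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep only points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]       # perspective divide

vertices = np.array([[0.0, 0.0, 10.0], [1.0, 2.0, 12.0], [-2.0, 1.0, 15.0]])
print(project(vertices))                  # where each vertex samples the photograph
```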

“Geoscientists are therefore able to integrate different types of field data, making it a powerful tool,” Buckley explains:

“In the end, we can make a very nice visual representation to show the analysis and the project datasets, which is very useful for geoscientists who want to present their results, for example to their collaborating partners and sponsors, to the public, or at conferences,” Buckley says.

“Thanks to the technology and application convergence, the adoption of digital mapping techniques is having a major impact in many areas of the geosciences and beyond,” Buckley says.

 

[Source:- Science Daily]

Enhancing the Power of Elastic Email via CRM Integration

Elastic Email is a powerful email platform that can help improve email marketing campaigns by easily creating newsletters and sending email more efficiently. However, it still needs people to create or update marketing lists, process unsubscribes in a CRM system, and create and send campaign reports for analysis. This takes time, is error-prone and adds unnecessary employee cost. By integrating Elastic Email with CRM systems it is possible to remove this costly administration from email marketing.

Synchronising marketing lists and unsubscribes

Contact lists are a vital component of marketing campaigns and therefore need to be managed and updated on a regular basis. If your business uses a CRM system to collate and manage these contact lists, then updating them manually in Elastic Email will be a costly, employee-intensive process.

TaskCentre can automatically synchronise your marketing lists between Elastic Email and your CRM system on a scheduled or database event. It will also write back to your CRM system any campaign unsubscribes and hard bounces encountered by Elastic Email.
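
A minimal sketch of what such a synchronisation step could look like, assuming a generic REST-style email platform and CRM; the endpoints and field names below are hypothetical placeholders, not the actual Elastic Email, CRM or TaskCentre interfaces.

```python
# Sketch of the synchronisation idea: pull unsubscribes and hard bounces from
# an email platform and flag the matching CRM contacts so they are excluded
# from future campaigns. Endpoints and field names are hypothetical.
import requests

EMAIL_API = "https://email.example.com/api"   # placeholder email-platform API
CRM_API = "https://crm.example.com/api"       # placeholder CRM API

def sync_suppressions(api_key):
    resp = requests.get(f"{EMAIL_API}/suppressions",
                        params={"apikey": api_key, "types": "unsubscribe,hard_bounce"})
    resp.raise_for_status()
    for record in resp.json():                # e.g. {"email": "...", "type": "unsubscribe"}
        # Mark the contact in the CRM so no further campaigns reach them.
        requests.patch(f"{CRM_API}/contacts/{record['email']}",
                       json={"email_opt_out": True, "reason": record["type"]}).raise_for_status()

# Typically run on a schedule (e.g. nightly) or triggered by a database event:
# sync_suppressions(api_key="...")
```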

Full email marketing automation to business rules

From time to time you might need to run ‘unplanned’ email campaigns. Factors such as slow-moving stock or pressure to cross-sell or upsell all mean more email campaigns need to be processed by the marketing team. Yet finding the time to run these unplanned campaigns can be difficult.

TaskCentre can automatically create and send Elastic Email campaigns based on data events you define, e.g. slow-moving stock. It will also update your CRM application with the results.

Automating campaign report distribution

Once an email campaign has been sent you will probably need to generate a report detailing the open and click-through results. This report will then be required by the sales team to update the CRM system and set up follow-up activities. More administration for you and your company to absorb.

TaskCentre can automatically create and distribute open and click-through reports and dynamically update CRM systems. Removing this administration allows your sales team to focus on their primary objective: selling.

The business benefits of integrating Elastic Email with your CRM solution include:

  • Removal of time consuming bi-directional data entry
  • Eradication of the risk of sending inappropriate communications to contacts whose statuses have changed in one application (CRM) but not your other systems (Elastic Email).
  • Improvement in employee productivity

If you want to find out more about Elastic Email integration or have any questions about what business process automation and application integration can do for your business call us on +44 (0)330 99 88 700.

 

 

[Source:- orbis-software]

Monster Oracle update patches database, Java

Bigger is not necessarily better, but it’s beginning to look like Oracle will release a monster Critical Patch Update (CPU) every quarter. These security updates affect databases, networking components, operating systems, application servers, Java, and ERP systems, leaving IT administrators to wrestle with the task of testing, verifying, and deploying several dozen patches in a timely manner.

The CPU is getting bigger — the average number of vulnerabilities patched per update in 2014 and 2015 was 128 and 161, respectively, compared to this year’s average of 228 — but most of the focus remains on the company’s middleware products. Of the 253 security flaws fixed in the October CPU, Oracle Database, MySQL, Java, Linux and virtualization products, and the Sun Systems suite accounted for only one-third of the patches. Oracle addressed 12 vulnerabilities in its core Oracle Database Server, 31 in the MySQL database, seven in Java SE, 13 in Oracle Linux and virtualization products, and 16 in the Sun Systems suite, which includes Solaris and Sparc Enterprise.

Several vulnerabilities are considered critical and could be remotely exploited without requiring authentication.

Database is important again

The last several updates from Oracle addressed few database flaws, but this latest CPU showed the flagship product a little bit of love. Oracle Database Server has nine new security fixes, of which only one was rated critical with a CVSS v3 base score of 9.1. However, that vulnerability in OJVM (CVE-2016-5555), which affects Oracle Database Server 11.2.0.4 and 12.1.0.2, cannot be remotely exploited over a network without requiring user credentials. In contrast, the six-year-old vulnerability in the Application Express component (CVE-2010-5312) has a CVSS v3 score of 6.1 but can be exploited over the network without authentication.

An issue with the DBA-level privileged accounts (CVE-2016-3562) applies to client-only installations and doesn’t need to have Oracle Database Server installed.

Two vulnerabilities in Oracle Secure Backup may be remotely exploitable without authentication, but were rated 5.8 on the CVSS v3 scale, making them of medium severity. The last security flaw, in Oracle Big Data Graph, is related to the Apache Commons Collections and is not remotely exploitable without authentication.

For Oracle MySQL, the most serious security flaws are in the Server:Security:Encryption component (CVE-2016-6304) and in the Python Connector (CVE-2016-5598) because they may be remotely exploited without authentication. Even so, Oracle did not consider these issues critical, assigning them CVSS v3 scores of 7.5 and 5.6, respectively. There were three fixes for the Encryption component and six for InnoDB.

Databases are typically not exposed to the internet, but administrators should plan on patching the vulnerabilities in MySQL Connector and Application Express as they are remotely exploitable and attackers can use them after compromising another system on the network.

Keep that Java patched

Administrators who support Java applications should pay close attention to the Java patches, as Oracle released seven important security updates that affect every version of Java Platforms 6, 7, and 8, and eight critical security updates for Oracle’s Java-powered WebLogic and GlassFish application platforms. Nearly all of the disclosed vulnerabilities are remotely exploitable without authentication, meaning any application running on the current or earlier versions of these Java products could be susceptible to remote attacks and exploitation.

Two of the Java Platform vulnerabilities affect the Java Management Extensions (JMX) and Networking APIs built into the Java Platform. Critical Java applications are likely operating with these flawed APIs and should be prioritized for patching as quickly as possible.

“These two APIs are present and loaded in all but the most trivial Java applications,” said Waratek CTO John Matthew Holt.

The CVSS scores for the Java security flaws assume that the user running the Java applet or Java Web Start application has administrator privileges. This is a common user scenario in Windows, which is why the scores are so high. In environments where users do not have administrator privileges — a typical situation for Solaris and Linux users, and also for some Windows users — the impact scores drop significantly. A CVSS v3 base score of 9.6 for a Java SE flaw drops to 7.1 in those deployments, Oracle said in the advisory.

Java on Windows machines should have priority. This advisory also shows why it pays off for Windows administrators not to give their users elevated privileges by default.

“Users should only use the default Java Plug-in and Java Web Start from the latest JDK or JRE 8 releases,” Oracle said.

Even though Oracle WebLogic Server and Oracle Glassfish Server are grouped into Oracle Fusion Middleware, Holt highlighted the five vulnerabilities in WebLogic and two in GlassFish that are remotely exploitable over HTTP and HTTPS protocols without authentication. A successful exploit against critical business applications on Java-powered WebLogic and GlassFish applications could hijack the application stack and expose confidential application data.

Remote exploits over HTTP/HTTPS pose serious risks due to the “ubiquity of HTTP/HTTPS access to Java-powered applications,” Holt warned.

Fixes in for Oracle Linux and Sun Systems, too

Oracle also fixed 13 flaws in Oracle Virtualization, four of which are remotely exploitable without authentication. Eight flaws affected Oracle VM VirtualBox, and the most critical one, affecting the VirtualBox Remote Desktop Extension (CVE-2016-5605), applies to every single version of VirtualBox prior to 5.1.4.

Much like the database issues, the flaw in VirtualBox’s OpenSSL component (CVE-2016-6304) should be prioritized and patched immediately because attackers can use this flaw as they move laterally through the network.

On the operating system, Oracle fixed 16 vulnerabilities in the Oracle Sun Systems Products Suite, which includes Solaris and the Sun ZFS Storage Appliance Kit. The CVSS v3 scores range from 2.8 to 8.2, but three issues that can be exploited over a network without requiring user credentials are all of low severity. Even so, administrators should pay attention to the fixes for ZFS Storage appliance’s DNS, the IKE component in Solaris, and HTTP in Solaris because of the risk of a remote attack.

Set the priority list

Organizations prioritize patches differently. One with a lot of Java users on Windows would bump up the patches’ priority higher than one that’s a pure-Linux shop. Critical business applications on WebLogic will need some attention, as will those organizations that use VirtualBox throughout their virtualized infrastructure.

Researchers at ERPScan sorted the fixed vulnerabilities by their CVSS v3 scores and noted that the flaw in Oracle WebLogic Server (CVE-2016-5535), which affects versions 10.3.6.0, 12.1.3.0, 12.2.1.0 and 12.2.1.1, was third on the list. A successful attack can result in a takeover of Oracle WebLogic Server. The vulnerability in Java SE’s Hotspot subcomponent (CVE-2016-5582) was fifth. While easily exploitable, a successful attack using this vulnerability would require human interaction from a person other than the attacker.

Oracle didn’t indicate whether any of these flaws have been exploited in the wild, but warned against skipping the patches in favor of workarounds. While it’s possible to reduce the risk of a successful attack by blocking network protocols or removing certain privileges or access to certain packages, such workarounds do not correct the underlying problem.

“Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply CPU fixes as soon as possible,” the company wrote in the advisory accompanying the CPU release.

 

[Source:- JW]

 

Azure SQL Data Warehouse brings MPP to Microsoft cloud

“It is an extremely high-performance MPP service, with column store indexing,” according to Andrew Snodgrass, analyst at Directions on Microsoft in Kirkland, Wash. “Azure SQL Data Warehouse can put numerous processors to work on queries, returning results much faster than any single server.”

For many smaller companies, a data warehouse is still new, and dedicating staff to nurse and feed the warehouse is a burden. Cloud can be a benefit there. But even large companies with established data warehousing programs are currently reviewing their options.

That is one reason Microsoft is promoting the new offering as an alternative to on-premises data warehouses, especially those that focus on producing monthly reports. Today, these systems may have low utilization over much of the month, and then find high use as monthly reports come due.

That is a perfect case for cloud, a Microsoft data leader asserts. In an online blog last week discussing Azure SQL Data Warehouse’s move to general availability, Joseph Sirosh, corporate vice president overseeing Microsoft’s Data Group, described Azure elastic cloud computing as a means to efficiently marry processing with workload requirements for data warehousing. The service has been available in preview releases since June of last year.

Broad shoulders of column architecture

While some column-based analytics systems go back over 20 years, broad use of the column data architecture was still fairly new in 2012 when Amazon tapped such technology for its Redshift data warehouse in the cloud. Such software was principally useful when data warehouses were required to support large numbers of user queries against their data stores.

Redshift seriously upped the ante in cloud data warehouses, making them more than commodity-type products. As Redshift became a prominent part of Amazon’s cloud portfolio, it put pressure on Microsoft to add similar capabilities to its Azure line, while at the same time supporting basic SQL Server compatibility. While it has taken a while to achieve, observers said this release of a Microsoft cloud-based data warehouse is still timely.

“Microsoft is playing catchup to some extent. But it was required to make some significant changes. Their strategy now is cloud first,” said Ben Harden, principal for data and analytics at Richmond, Va.-based services provider CapTech Ventures Inc.

Harden said cloud computing is a very influential trend and that CapTech is now seeing demand for both Amazon and Microsoft cloud implementations.

The wait for Microsoft’s cloud data warehouse may have been worthwhile, according to Joe Caserta, president at Caserta Concepts LLC, a New York-based data consultancy that has partner agreements with both Amazon and Microsoft analytics.

“I am kind of glad they waited until they were ready,” he said. “They now have a good core set of tools.”

Scalability on a scale of one to ten

The ability to scale up to handle peak data loads is a plus for Azure SQL Data Warehouse, according to Paul Ohanian, CTO at Pound Sand, an El Segundo, Calif.-based electronic game developer that has worked with the new Microsoft software. A major use has been to produce analytics to track players’ behavior, identify trends and create projections. Scaling up was a concern, he said.

“Our game was featured in the iOS App Store for seven weeks over Christmas. We went from testing a game with about 1,000 people overseas to all of a sudden getting half-a-million users in six days. But Azure SQL Data Warehouse allowed us to very easily scale from something like 1,000 users to 100,000 users,” he said. “Literally, we saw that rise in eight hours.”

Handling such issues was his team’s original goal, according to Ohanian. “When we shipped our game, the scaling totally worked,” he said.

Ohanian said his group has been an Azure cloud user for a number of years, but that it looked at other cloud and analytics alternatives. Affinities between APIs already in use and APIs for Azure SQL Data Warehouse were a factor in choosing the Microsoft software, he said.

By his estimation, the cloud data warehouse service is not too late to the fair, and it is full-fledged in important areas such as management.

“Maybe it’s because it is coming later, but it seems to have streamlined some of the complexity,” Ohanian said. “It is easy to get things up and going, and to manage.”

Ohanian also looked favorably on Azure SQL Data Warehouse pricing, which separates expenses for storage and computing. Starting Sept. 1, data storage will be charged based on Azure Premium Storage rates of $122.88 per 1TB per month; at that time, compute pricing will be about $900 per month per 100 data warehouse units, according to Microsoft. Data warehouse units are the company’s measure for underlying resources such as CPU and memory.
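
Using the quoted rates, a back-of-the-envelope monthly estimate is straightforward; the sketch below assumes compute is billed for the entire month, whereas in practice the warehouse can be paused or scaled.

```python
# Rough monthly cost estimate from the quoted rates: $122.88 per 1TB per month
# for storage and about $900 per month per 100 data warehouse units (DWUs).
# Assumes compute is billed for the whole month; in practice it can be paused.
STORAGE_PER_TB_MONTH = 122.88
COMPUTE_PER_100_DWU_MONTH = 900.0

def monthly_cost(tb_stored, dwu):
    storage = tb_stored * STORAGE_PER_TB_MONTH
    compute = (dwu / 100.0) * COMPUTE_PER_100_DWU_MONTH
    return storage + compute

# Example: 5 TB of data kept on a 400 DWU warehouse for a full month.
print(f"${monthly_cost(tb_stored=5, dwu=400):,.2f}")   # $4,214.40
```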

It is important for Microsoft to field data warehouse products suited for its established users, so, naturally, C# and .NET developers continue to be a target of Azure cloud updates.

“We are seeing pretty equal demand between Amazon and Azure clouds. It is boiling down to what skill sets users have today,” said consultant Harden. “People are taking the path of least resistance where skills are concerned.”

Path of least resistance leads me on

The availability of Microsoft’s massively parallel processing cloud data warehouse augurs greater competition in cloud data, which some welcome.

An example is Todd Hinton, vice president of product strategy at RedPoint Global in Wellesley, Mass., a maker of data management and digital marketing tools. He is not alone in saying greater competition in the cloud data space could be good, or in being unwilling to pick a winner just yet.

“I think they are going to be head-to-head competitors,” Hinton said. “You have Amazon fans and you have Microsoft fans. It’s almost like the old operating system battle between Linux and Windows. For our part, we are data agnostic. We will be interested to see how Azure SQL Data Warehouse shakes out.”

Like others, his company’s software supports both Amazon and Azure. The company offers direct integration with AWS Redshift already, and he said he expects it will offer native support for Azure SQL Data Warehouse later this year.

Competition in cloud data warehouses goes further too, with players ranging from IBM, Informatica, Oracle and Teradata to Cazena, Snowflake Computing, Treasure Data and others. While it is not as hot as the Hadoop or Spark data management cauldrons, releases like Microsoft’s show the cloud data warehouse space is heating up.

 

[Source:- techtarget]

Solution for secure processing of patient data revealed

Thanks to a technique developed by Radboud University, large-scale research involving patient data can be done without threat to either the security of the information or the privacy of the patients. This technique will be used for a new, large-scale study of Parkinson’s disease.

Collecting and analysing medical data on a large scale is an increasingly important research tool in understanding illnesses. To quickly arrive at new insights and avoid double work, it is important that international researchers work together to use and enrich one another’s data. Such studies often involve sensitive patient information. Patients must be confident that their privacy will be safeguarded and their data securely stored in line with upcoming European regulations on privacy, known as the strictest in the world.

To make this possible, Professor Bart Jacobs and Professor Eric Verheul, both computer scientists at Radboud University, have developed the Polymorphic Encryption and Pseudonymisation (PEP) technique. The PEP technique realises this goal by pseudonymising and encrypting data in such a way that the data cannot be accessed even by the party who stores the data. Moreover, access to the data is strictly regulated and monitored. The PEP technique makes it possible to analyse data from a study while ensuring that a patient’s privacy is safeguarded.
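
One way to picture the pseudonymisation side of this (in particular the per-researcher pseudonyms summarised at the end of this article) is as a keyed one-way mapping from patient identifier to pseudonym that differs for every researcher. The sketch below is only a conceptual illustration: the real PEP system is built on polymorphic encryption rather than the simple keyed hash shown here, and all identifiers and keys are invented.

```python
# Conceptual illustration of per-researcher pseudonyms: the same patient maps
# to a different, unlinkable pseudonym for each researcher. The real PEP
# system uses polymorphic encryption, not the keyed hash shown here.
import hashlib
import hmac

def pseudonym(patient_id: str, researcher_key: bytes) -> str:
    """Derive a stable pseudonym for one patient, specific to one researcher."""
    return hmac.new(researcher_key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

patient = "patient-0042"
print(pseudonym(patient, b"key-issued-to-researcher-A"))  # what researcher A sees
print(pseudonym(patient, b"key-issued-to-researcher-B"))  # a different pseudonym for researcher B
```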

One of the first applications of the PEP technique is a study of Parkinson’s that was initiated by Radboud University. In this study, 650 people with Parkinson’s will be monitored for two years by means of, among other things, portable measuring equipment (wearables). Thanks to the PEP technique, the research data collected in the Netherlands can be shared in pseudonymised form with top researchers throughout the world.

Public investment in privacy

“In the context of international medical research, personal information is worth its weight in gold. So it’s important for the government to invest in an infrastructure that guarantees the protection of this information,” said Bart Jacobs, Professor of Digital Security at Radboud University. “Especially to ensure that people will remain willing to participate in future studies of this sort.” Radboud University and Radboud university medical center are investing €920,000 in the development of the PEP software. The Province of Gelderland is contributing €750,000. The software will be made available as open source so that other parties may also use it.

Bart Jacobs is optimistic about the future of the PEP system. “The study of Parkinson’s should demonstrate the usefulness of PEP. With this showcase as an example, PEP could grow to become the international standard for storing and exchanging privacy-sensitive medical data.” The first reactions from the field are positive, Jacobs concluded.

In short, Polymorphic Encryption and Pseudonymisation works as follows:

· the managers of the data cannot access the data

· participants in the study decide for each study if they want to allow their data to be used

· researchers who use the data are given a unique key

· the participants have a different pseudonym for each researcher. This prevents researchers from using another route to access data that they are not allowed to see.

 

 

[Source:- Science Daily]

Achieving Operational Efficiency via Workflow Automation

Operational efficiency is the capability of an organisation to deliver products or services to its customers in the most cost-effective manner possible while still ensuring the high quality of its products, service and support. Unsurprisingly, improving operational efficiency is a fundamental objective for the majority of businesses.

The main contributing dynamic to operational efficiency is workflow. It’s therefore surprising how many organisations still depend on a large amount of manual processing, using legacy or siloed systems, paper-based forms and Excel spreadsheets, rather than automating these mundane tasks that underpin the smooth running of a business.

Automating Workflow

Workflow automation is about streamlining and automating business processes, whether for finance, sales & marketing, HR or distribution. Deploying workflow automation to each department’s everyday business processes will reduce the number of tasks employees would otherwise do manually, freeing them to spend more time on value-added tasks.

This essentially allows more things to get done in the same amount of time and will speed up production and reduce the possibility of errors. Tasks simply cannot get ignored or bypassed with an automated workflow system in place.

If the right processes are established, then every employee will know what is expected of them and what they are accountable for. Any deviation will be escalated to management via a notification. In fact, management can benefit from being able to see exactly what is going on at a macro level, rather than having to request updates and reports.
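
As a simple illustration of such an escalation rule, the sketch below checks whether a workflow step has passed its deadline and, if so, notifies the owner’s manager; the task fields and notification mechanism are illustrative assumptions, not a TaskCentre configuration.

```python
# Toy escalation rule: if a workflow step is not completed by its deadline,
# notify the owner's manager. Task fields and the notification mechanism are
# illustrative only.
from datetime import datetime, timedelta

def check_escalation(task, notify=print):
    """Escalate an incomplete task to its owner's manager once it is overdue."""
    if not task["completed"] and datetime.now() > task["deadline"]:
        notify(f"ESCALATION: '{task['name']}' assigned to {task['owner']} "
               f"is overdue; notifying {task['manager']}")

invoice_approval = {
    "name": "Approve supplier invoice #1234",
    "owner": "finance.clerk@example.com",
    "manager": "finance.manager@example.com",
    "deadline": datetime.now() - timedelta(hours=4),   # already 4 hours overdue
    "completed": False,
}
check_escalation(invoice_approval)
```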

If there is a bottleneck in the workflow then managers can make an informed decision and act on it immediately. With the ability to measure workflow, businesses can also understand where it is possible to improve.

Workflow automation can also help organisations maintain standards and compliance by configuring the workflow to make sure all essential activities and outcomes are tracked and escalated. Aligning workflows with policy makes it straightforward for users to comply.

TaskCentre’s Workflow & Human Interaction Capabilities

TaskCentre can integrate all business systems within an organisation. The Workflow & Human Interaction capability can then provide organisations with a powerful and flexible workflow automation solution that ensures business rules are adhered to and administration is removed.

It permits users to receive and authorise multi-level workflow jobs, regardless of the device or business system the workflow starts or ends in, and creates a 100% audit trail for complete peace of mind.

Here are a few examples of processes that can benefit from workflow automation:

Finance

  • Bank account management
  • Invoice processing
  • Governance and compliance

Sales

  • Automate lead follow-up notifications
  • Trigger alerts regarding high-quality leads
  • Notify account manager about contract expirations

Marketing

  • Upsell, Cross-sell and Cycle-based sell opportunities
  • Follow up shopping cart abandonments
  • Create automated email responses for enquiries

Production

  • Supplier and contract management
  • New product development
  • Procurement and work order approvals

Getting the Best ROI from Workflow

Adding workflow capabilities to your system will revolutionise your business, but it is important to get workflow automation correct from the start otherwise it will only create problems in the future. Organisations need to understand what inefficiencies and business processes they want to address and make sure that everyone involved in the various workflows contributes to their design. If something is missing then the workflow won’t work.

Once it has been created and tested, the workflow ideally needs to be documented and communicated to the users. Human interaction will be required at some point in the workflow, so employees need to know what is expected of them. Additionally, if someone leaves the organisation, having the workflow documented will enable the next person to pick up the process very quickly, without adversely affecting it.

 

 

[Source:- orbis-software]

ARM builds up security in the tiniest IoT chips

IoT is making devices smaller, smarter, and – we hope – safer. It’s not easy to make all those things happen at once, but chips that can help are starting to emerge.

On Tuesday at ARM TechCon in Silicon Valley, ARM will introduce processors that are just a fraction of a millimeter across and incorporate the company’s TrustZone technology. TrustZone is hardware-based security built into SoC (system on chip) processors to establish a root of trust.

It’s designed to prevent devices from being hacked and taken over by intruders, a danger that’s been in the news since the discovery of the Mirai botnet, which recently took over thousands of IP cameras to mount denial-of-service attacks.

“What ARM is trying to do is plug the holes before they can get started,” said analyst Bob O’Donnell of Technalysis Research.

As the array of IoT products expands into things like connected toothbrushes, many are being made by companies that know little about security, he said. ARM recognizes this.

“They’ve taken on the difficult task of trying to embed as much security into the device as possible,” O’Donnell said. It’s a big stretch for ARM, but the company’s well positioned because it already supplies the architecture for most IoT chips, he said.

TrustZone has been around for a decade for Windows, Mac OS and Android products but never for chips this small or low-powered.

The new Cortex-M33 chip design is just one-tenth of a square millimeter, and the Cortex-M23 is 75 percent smaller than that. They’re the first chips based on the new ARMv8-M architecture and are designed to work with ARM’s mbed OS. Chip vendors including Analog Devices, NXP and STMicroelectronics have already licensed the design.

ARM expects chips based on them to be used in products like bandages that collect and send medical data, tracking tags for packages in transit, and portable blood-monitoring devices.

These things won’t be plugged in to an outlet and may not even have batteries: A pocket-sized blood-testing device for diabetics could harvest enough energy to do its job just from the motion of the user removing the cap, ARM says.

Until now, this class of chip has had proprietary security hardware and software in many cases, which caused some limitations, said Nandan Nayampally, vice president of marketing in ARM’s CPU group. Added hardware made them less efficient, and developing different software for every chip duplicated effort.

With TrustZone, the chips can be secured without increasing their footprint, and they can use standard TrustZone software with APIs (application programming interfaces) for adding custom features.

Also on Tuesday, ARM introduced a cloud-based platform for managing and updating IoT processors for as long as they’re deployed. The mbed Cloud software-as-a-service platform is designed to solve the problem of how to manage millions of chips in devices that may be deployed all over a city or a global enterprise.

The platform can get a device set up and connected and then handle firmware updates over time. It also has a role to play in keeping IoT chips secure.

When a device boots up for the first time in the field, mbed Cloud can provide a security key for the communications channel and specify who can get access to the data from the device, based on enterprise policies.

The service can also help to prevent IoT-based denial-of-service attacks by monitoring what’s going on in the network. If there are abnormally chatty devices, it can isolate them or shut them down.
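
As a toy illustration of that kind of monitoring, the sketch below flags devices whose message count over a recent interval is far above the fleet’s median; this is an assumption-laden stand-in, not the actual mbed Cloud logic, and the device names and threshold are invented.

```python
# Toy rule for spotting abnormally chatty devices from per-device message
# counts over a recent interval.
def flag_chatty_devices(message_counts, threshold_multiplier=10):
    """Return devices sending far more messages than the fleet's median rate."""
    rates = sorted(message_counts.values())
    median = rates[len(rates) // 2]
    limit = max(1, median) * threshold_multiplier
    return [device for device, count in message_counts.items() if count > limit]

fleet = {"sensor-1": 12, "sensor-2": 9, "sensor-3": 11, "camera-7": 4800}
print(flag_chatty_devices(fleet))   # ['camera-7'] -> candidate for isolation or shutdown
```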

The SaaS platform isn’t just for devices with ARM-based chips or the mbed OS. If customers have legacy devices with other chips running Linux or freeRTOS, for example, ARM has a software module for connecting them to the mbed Cloud.

The service can be run on multiple public clouds, including Amazon’s and IBM’s.

 

 

[Source:- JW]

Azure Data Lake Analytics gets boost from U-SQL, a new SQL variant

The big data movement has frozen out many data professionals who are versed in SQL. Microsoft’s U-SQL programming language tries to get such folks back in the data querying game.

One of the dirty little secrets of big data is that longtime data professionals have often been kept on the sidelines.

Hadoop, Spark and related application frameworks for big data rely more on Java programming skills and less on SQL skills, thus freezing out many SQL veterans — be they Microsoft T-SQL adepts or others.

While continuing its push into Azure cloud support for Hadoop, Hive, Spark, R and the like, Microsoft is looking to enable T-SQL users to join the big data experience as well.

Its answer is U-SQL, a dialect of T-SQL meant to handle disparate data, while supporting C# extensions and, in turn, .NET libraries. It is presently available as part of a public preview of Microsoft’s Azure Data Lake Analytics cloud service, first released last October.

U-SQL is a language intended to support queries on all kinds of data, not just relational data. It is focused solely on enhancements to the SQL SELECT statement, and it automatically deploys code to run in parallel. U-SQL was outlined in detail by Microsoft this week at the Data Science Summit it held in conjunction with its Ignite 2016 conference in Atlanta.

Beyond Hive and Pig

The Hadoop community has looked to address this by adding SQL-oriented query engines and languages, such as Hive and Pig. But there was a need for something more akin to familiar T-SQL, according to Alex Whittles, founder of the Purple Frog Systems Ltd. data consultancy in Birmingham, England, and a Microsoft MVP.

“Many of the big data tools — for example, MapReduce — come from a Hadoop background, and they tend to require [advanced] Java coding skills. Tools like Hive and Pig are attempts to bridge that gap to try to make it easier for SQL developers,” he said.

But, “in functionality and mindset, the tools are from the programming world and are not too appropriate for people whose job it is to work very closely with a database,” Whittles said.

This is an important way to open up Microsoft’s big data systems to more data professionals, he said.

“U-SQL gives data people the access to a big data platform without requiring as much learning,” he said. That may be doubly important, he added, as Hive-SQL developers are still a small group, compared with the larger SQL army.

U-SQL is something of a differentiator for Azure Data Lake Analytics, according to Warner Chaves, SQL Server principal consultant with The Pythian Group Inc. in Ottawa and also a Microsoft MVP.

“The feedback I have gotten from database administrators is that big data has seemed intimidating, requiring you to deploy and manage Hadoop clusters and to learn a lot of tools, such as Pig, Hive and Spark,” he said. Some of those issues are handled by Microsoft’s Azure cloud deployment — others by U-SQL.

“With U-SQL, the learning curve for someone working in any SQL — not just T-SQL — is way smaller,” he said. “It has a low barrier to entry.”

He added that Microsoft’s scheme for pricing cloud analytics is also an incentive for its use. The Azure Data Lake itself is divided into separate analytics and storage modules, he noted, and users only have to pay for the analytics processing resources when they’re invoked.

More in store

While it looks out for its traditional T-SQL developer base, Microsoft is also pursuing enhanced capabilities for Hive in the Azure Data Lake.

This week at the Strata + Hadoop World conference in New York, technology partner Hortonworks Inc. released its version of an Apache Hive update using LLAP, or Live Long and Process, which uses in-memory and other architectural enhancements to speed Hive queries. It’s meant to work with Microsoft’s HDInsight, a Hortonworks-based Hadoop and big data platform that is another member of the Azure Data Lake Analytics family.

Meanwhile, there’s more in store for U-SQL. As an example, at Microsoft’s Data Science Summit, U-SQL driving force Michael Rys, a principal program manager at Microsoft, showed attendees how U-SQL can be extended, focusing on how queries in the R language can be exposed for use in U-SQL.

The R language has garnered more and more support within Microsoft since the company purchased Revolution Analytics in 2015. While R programmers dramatically lag SQL programmers in size of population, R is finding use in new analytics applications, including ones centered on machine learning.

 

[Source:- techtarget]