Oracle re-spins legacy software into cloud growth game

The demise of legacy enterprise software vendors may be a bit overblown, and Oracle appears to be showing a blueprint and a ground game that leverage its customer base into an as-a-service recurring revenue stream.

Oracle’s fourth quarter was strong on many fronts, but it can be summed up in two words: Legacy lives. And if you couple Oracle’s results with strong performances from Red Hat and VMware, you’ll find that the giants are adapting.
Brad Reback, an analyst at Stifel, put it best in a research note following Oracle’s fourth quarter:

Oracle’s strong print combined with recent solid results from Red Hat and VMware suggest that legacy software vendors continue to improve execution despite the changing IT landscape. It further goes to show how sticky enterprise software is and that legacy players can adjust as they leverage their privileged position within their massive installed base footprint.

You could also add SAP into that mix.

Perhaps these enterprise giants are melding license and cloud models into something hybrid.

The upshot: Enterprise software giants don’t necessarily have to win new cloud customers so much as convert the ones they already have to an as-a-service model. Meanwhile, the power of Oracle’s bundle (database, HCM, CRM, ERP, infrastructure) means that customers can cut deals. Oracle has about 13,550 customers in its active software-as-a-service base.

And now let’s toss in Oracle’s cloud at customer play. Oracle noted that AT&T is moving its on-premise database to the cloud. The revenue didn’t show up in Oracle’s fourth quarter, but the deal was strategic. Oracle’s plan is to bridge hybrid and public cloud by running its services in a customer’s data center.

CTO Larry Ellison and CEO Mark Hurd outlined the deal. Ellison noted that there are deals similar to AT&T’s in the pipeline. “During this new fiscal year, we expect both our PaaS and IaaS businesses to accelerate into hyper-growth, the same kind of growth we’re seeing with SaaS, as our customers begin to migrate their millions of Oracle databases to Generation 2 of the Oracle Public Cloud,” said Ellison.

Hurd added the following details on the AT&T deal.

  • AT&T’s deal is ratable over time and Oracle nets more revenue.
  • There are more than 10,000 Oracle databases at AT&T.
  • AT&T wanted the data on-premise due to regulations.
  • “We take our Oracle Cloud machine and we are able now to do all of that with them on their premise and give them all the benefits of the cloud. We manage. We patch. We basically run the cloud for them and we help them get all of that done,” said Hurd.
  • Financials weren’t disclosed, but Hurd said the AT&T transaction is important because of what it represents for the Oracle customer base.
  • Meanwhile, Oracle added several ERP, HCM and CRM customers. These customers are also moving up from on-premises to the cloud.

Oracle’s bet is that on-premise database licenses will convert to its cloud machine model, much the way customers have already moved applications to the cloud.

[Source: ZDNet]

Software export growth set to slow: Nasscom

Domestic market expected to grow faster than exports and may touch $26.5 billion in FY18

The country’s software export growth is set to slow to 7-8% this fiscal year, down from 8.6% a year earlier, according to industry body Nasscom.

Nasscom expects software export revenue of $124-125 billion in the current fiscal year to March 2018. Software exports in the year ended March 2017 were $116 billion, and domestic market revenue, excluding hardware, was $24 billion, both in constant currency terms. The domestic market is projected to grow faster than the export market this fiscal, Nasscom president R. Chandrashekhar said. Revenue from the domestic market may increase 10-11% and touch $26-26.5 billion.

Last fiscal, the industry added $11 billion in revenue, an increase of 8.6% in constant currency and 7.6% in reported currency, despite headwinds in the form of “increased rhetoric on protectionism, elections, Brexit and visa issues.”

Macroeconomic uncertainties also led to delay in the decision-making process of customers, while in traditional services the growth was slower on account of the focus on cost optimisation. Currency volatility led to a difference of 1-3% between constant currency and reported currency growth, the National Association of Software & Services Companies (Nasscom) said.

Stating that this was the first time Nasscom had made its guidance announcement in Hyderabad, Chairman Raman Roy said it was a precursor to the focus that will remain on the city over the next 8-12 months. Hyderabad will play host to several Nasscom programmes as well as the prestigious World Congress on IT, which is coming to India for the first time.

‘Inflection point’

The outlook, Mr. Roy said, comes against the backdrop of the industry being at an interesting “inflection point.”

Mr. Chandrashekhar said improvements in financial services and a high potential in digital businesses would be the key growth drivers. Nasscom had deferred making the announcement in February. Then it had lowered the projections for the last fiscal year.

“The direction today is far clearer… [we] have greater visibility and [are] reasonably confident of what we are talking about,” he said.

An improvement in legacy business and increased automation-based projects would be among the growth drivers, Mr. Chandrashekhar said. The industry also expects to remain a net hirer, adding between 1.3 lakh and 1.5 lakh (130,000-150,000) people this year, he said. The demand will be for technology-skilled professionals, making it imperative for new and existing employees to reskill.

Vice-chairman Rishad Premji said a big opportunity for reskilling was emerging and, unlike in the past, many in the workforce were coming forward to equip themselves with new skills.

[Source: The Hindu]

BlackBerry’s Banner Year Hits Snag as Software Sales Falter

The Canadian company, which exited the hardware business last year, missed analysts’ estimates for total revenue, the majority of which is now made up of software sales. Revenue excluding some costs was $244 million in the fiscal first quarter compared with the average analyst estimate of $265.4 million.

The shares fell 5.2 percent to $10.49 in early market trading at 7:58 a.m. in New York.

The lower-than-projected sales struck a negative note in what has otherwise been a banner year for the Waterloo, Ontario-based company. Shares have surged more than 60 percent as investors started treating BlackBerry like the growing software company it has turned itself into. An $814 million windfall awarded to end a dispute with Qualcomm Inc. over royalty payments and positive comments from short seller Andrew Left didn’t hurt either.

The Qualcomm payment bolstered BlackBerry’s cash reserves, which now stand at $2.6 billion. That means Chief Executive Officer John Chen could resume making acquisitions to bolster software revenue, a tactic that helped replace some of the company’s evaporating hardware sales over the last three years.

Share Buyback

Some of that cash will go toward share buybacks, with BlackBerry authorizing TD Securities to buy back as much as 6.4 percent of the company’s outstanding shares on its behalf. Buybacks have been part of Chen’s toolbox in his bid to revive the company’s fortunes. Shares taken out of public circulation are used to offset the company’s employee equity incentive plan.

BlackBerry also re-organized how it reported revenue to reflect its current reality as a software company with a side business in licensing old hardware patents. The new software and services segment accounted for $92 million in revenue, up 12 percent from what would have been $82 million in the same quarter last year.

Chen has said he wants to increase that software number faster than 13 percent a year, as he fights with competitors like International Business Machines Corp. and MobileIron Inc. for the growing market in software that helps companies and governments keep their employees’ devices safe from hackers.

Licensing revenue was $32 million, compared with $25 million last year. Handheld devices revenue, which is made up of licensing agreements for the company’s phone brand to companies like TCL Corp., was $37 million, compared with $152 million last year, when the company still produced its own phones.

[Source: Bloomberg]

Researchers from the UGR develop new software that adapts medical technology to see the interior of sculptures

A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures. The tool lets restorers see inside wood carvings without damaging them, and it has been designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, provides a specialized viewer that applies computed tomography to the restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving to be studied.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information presented by older techniques, revealing its internal structure, the age of the wood from which it was made, and possible later additions.
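
As a rough illustration of what such CT-based inspection involves, here is a minimal Python sketch, assuming a synthetic CT volume, that extracts an isosurface with marching cubes to expose internal cavities. It is a generic sketch of the underlying technique, not the 3DCurator application itself.

    # Generic sketch of CT-volume inspection (not 3DCurator's code):
    # stack CT slices into a 3-D array and extract an isosurface to reveal
    # internal voids or inserts without touching the carving.
    import numpy as np
    from skimage import measure

    # A synthetic stand-in for a CT scan: a cylinder of "wood" with a hidden cavity.
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    volume = ((x - 32) ** 2 + (y - 32) ** 2 < 25 ** 2).astype(float)          # wood
    volume[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 8 ** 2] = 0.0      # cavity
    volume += 0.05 * np.random.default_rng(0).standard_normal(volume.shape)   # scanner noise

    # Marching cubes at a density threshold separates wood from air and cavities.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"surface mesh: {len(verts)} vertices, {len(faces)} faces")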

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.

This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information of a carving, verifies whether it contains metallic elements, identifies damage from xylophages such as termites and the tunnels they bore, and detects plasters or polychrome paint added later on top of the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a statue of San Juan Evangelista from the 16th century and an Immaculate Conception from the 17th century, which can be examined virtually at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es/).

[Source: Phys.org]

Software system labels coral reef images in record time

Computer scientists at the University of California San Diego have released a new version of a software system that processes images from the world’s coral reefs between 10 and 100 times faster than processing the data by hand.

This is possible because the new version of the system, dubbed CoralNet Beta, includes deep learning technology, which uses vast networks of artificial neurons to learn to interpret image content and to process data.

CoralNet Beta cuts the time needed to go through a typical 1,200-image diver survey of the ocean floor from 10 weeks to just one week, with the same level of accuracy. Coral ecologists and government organizations, such as the National Oceanic and Atmospheric Administration, also use CoralNet to automatically process images from autonomous underwater vehicles. The system allows researchers to label different types of coral and whether they’ve been bleached, different types of invertebrates, different types of algae and more. In all, more than 2,200 labels are available on the site.
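
For intuition, here is a heavily simplified, hypothetical Python sketch of point-based annotation: sample random points in a survey photo, crop a small patch around each point and assign a label with a classifier. The toy nearest-centroid colour classifier (and its made-up centroids) merely stands in for the deep network that CoralNet Beta actually uses.

    # Toy sketch of point-based reef annotation (not CoralNet's actual model).
    import numpy as np

    rng = np.random.default_rng(42)
    image = rng.random((500, 500, 3))   # placeholder for a survey photograph
    PATCH = 16                          # half-width of the patch around each point

    # Hypothetical class centroids in mean-colour space, "learned" from past labels.
    centroids = {
        "hard_coral": np.array([0.55, 0.45, 0.40]),
        "algae":      np.array([0.30, 0.55, 0.30]),
        "sand":       np.array([0.80, 0.75, 0.65]),
    }

    def label_point(img, row, col):
        patch = img[row - PATCH:row + PATCH, col - PATCH:col + PATCH]
        feature = patch.reshape(-1, 3).mean(axis=0)   # mean colour of the patch
        return min(centroids, key=lambda k: np.linalg.norm(feature - centroids[k]))

    # Annotate 200 random points per image, as in a typical point-count survey.
    points = rng.integers(PATCH, 500 - PATCH, size=(200, 2))
    labels = [label_point(image, r, c) for r, c in points]
    print({lab: labels.count(lab) for lab in set(labels)})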

“This will allow researchers to better understand the changes and degradation happening in coral reefs,” said David Kriegman, a computer science professor at the Jacobs School of Engineering at UC San Diego and one of the project’s advisers.

The Beta version of the system runs on a deep neural network with more than 147 million neural connections. “We expect users to see a very significant improvement in automated annotation performance compared to the previous version, allowing more images to be annotated quicker—meaning more time for field deployment and higher-level data analysis,” said Oscar Beijbom, a UC San Diego Ph.D. alumnus and the project’s manager and founder of CoralNet.

He created CoralNet Alpha in 2012 to help label images gathered by oceanographers around the world. Since then, more than 500 users from research groups, nonprofits and government organizations have uploaded more than 350,000 survey images to the system. Researchers used CoralNet Alpha to label more than five million data points across these images with a random-point labeling tool designed by UC San Diego alumnus Stephen Chen, the project’s lead developer.

“Over time, news of the site spread by word of mouth, and suddenly it was used all over the world,” said Beijbom.

Other updates in the Beta version include an improved user interface, web security and scalable hosting at Amazon Web Services.

[Source: Phys.org]

Transforming, self-learning software could help save the planet

Artificially intelligent computer software that can learn, adapt and rebuild itself in real-time could help combat climate change.

Researchers at Lancaster University’s Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do.

The system — called REx — is being developed with vast energy-hungry data centres in mind. By being able to rapidly adjust to optimally deal with a huge multitude of tasks, servers controlled by REx would need to do less processing, therefore consuming less energy.

REx works using ‘micro-variation’, in which a large library of software building blocks (such as memory caches and different forms of search and sort algorithms) can be selected and assembled automatically in response to the task at hand.

“Everything is learned by the live system, assembling the required components and continually assessing their effectiveness in the situations to which the system is subjected,” said Dr Barry Porter, lecturer at Lancaster University’s School of Computing and Communications. “Each component is sufficiently small that it is easy to create natural behavioural variation. By autonomously assembling systems from these micro-variations we then see REx create software designs that are automatically formed to deal with their task.

“As we use connected devices on a more frequent basis, and as we move into the era of the Internet of Things, the volume of data that needs to be processed and distributed is rapidly growing. This is causing a significant demand for energy through millions of servers at data centres. An automated system like REx, able to find the best performance in any conditions, could offer a way to significantly reduce this energy demand,” Dr Porter added.

In addition, as modern software systems are increasingly complex — consisting of millions of lines of code — they need to be maintained by large teams of software developers at significant cost. It is broadly acknowledged that this level of complexity and management is unsustainable. As well as saving energy in data centres, self-assembling software models could also have significant advantages by improving our ability to develop and maintain increasingly complex software systems for a wide range of domains, including operating systems and Internet infrastructure.

REx is built using three complementary layers. At the base level, a novel component-based programming language called Dana enables the system to find, select and rapidly adapt the building blocks of software. A perception, assembly and learning framework (PAL) then configures the selected components and perceives their behaviour, and an online learning process learns the best software compositions in real time using statistical learning methods known as ‘linear bandit models’.
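
To illustrate the learning layer, the sketch below implements a generic LinUCB-style linear bandit in Python: each candidate composition is described by a feature vector, and the learner balances exploring compositions against exploiting the one that currently looks best. The features and reward function here are invented for illustration; this is a textbook bandit sketch, not REx’s Dana/PAL implementation.

    # Generic LinUCB-style linear bandit (illustration only, not REx's code):
    # choose among candidate software compositions, observe a reward such as
    # negative response time, and learn which composition features pay off.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 3  # number of features per composition (assumed)
    # Hypothetical compositions: [uses_cache, quicksort_variant, small_buffers]
    arms = np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])
    A = np.eye(d)      # ridge-regularised design matrix
    b = np.zeros(d)    # accumulated reward-weighted features
    alpha = 1.0        # exploration strength

    def observed_reward(x):  # stand-in for a live performance measurement
        return x @ np.array([0.6, 0.1, 0.3]) + 0.05 * rng.standard_normal()

    for t in range(500):
        theta = np.linalg.solve(A, b)                                   # current estimate
        ucb = [x @ theta + alpha * np.sqrt(x @ np.linalg.solve(A, x)) for x in arms]
        x = arms[int(np.argmax(ucb))]                                   # optimistic choice
        r = observed_reward(x)
        A += np.outer(x, x)                                             # update statistics
        b += r * x

    print("learned feature weights:", np.linalg.solve(A, b).round(2))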

The work is presented in the paper ‘REx: A Development Platform and Online Learning Approach for Runtime Emergent Software Systems’ at OSDI ’16, the 12th USENIX Symposium on Operating Systems Design and Implementation. The research has been partially supported by the Engineering and Physical Sciences Research Council (EPSRC) and by a PhD scholarship from Brazil.

The next steps of this research will look at the automated creation of new software components for use by these systems and will also strive to increase automation even further to make software systems an active part of their own development teams, providing live feedback and suggestions to human programmers.

[Source: SD]

Scientists unveil software that revolutionizes habitat connectivity modeling

A trio of Clemson University scientists has unveiled a groundbreaking computational software called “GFlow” that makes wildlife habitat connectivity modeling vastly faster, more efficient and superior in quality and scope.

After eight years of research and development, the revolutionary software was announced in the scientific journal Methods in Ecology and Evolution. Clemson University postdoctoral fellow Paul Leonard is the lead author of the article, “GFlow: software for modeling circuit theory-based connectivity at any scale.” Clemson’s co-authors are Rob Baldwin, the Margaret H. Lloyd-Smart State Endowed Chair in the forestry and environmental conservation department; and Edward Duffy, formerly a computational scientist in the cyberinfrastructure technology integration department who recently left the university to join BMW.

“Historically, landscape connectivity mapping has been limited in either extent or spatial resolution, largely because of the amount of time it took computers to solve the enormous equations necessary to create these models. Even using a supercomputer, it could take days, weeks or months,” said Leonard, who is in the forestry and environmental conservation department along with Baldwin. “But GFlow is more than 170 times faster than any previously existing software, removing limitations in resolution and scale and providing users with a level of quality that will be far more effective in presenting the complexities of landscape networks.”

Habitat connectivity maps are paired with satellite imagery to display the potential corridors used by animal populations to move between both large and small areas. Billions of bytes of data — including fine-grain satellite photographs and on-the-ground research — produce geospatial models of the movements of everything from black bears to white salamanders. These models help federal and state governments, non-governmental organizations and individual landowners redefine their conservation priorities by computationally illustrating the passageways that will need to be preserved and enhanced for animals to be able to continue to intermingle.
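
To make those “enormous equations” concrete: circuit-theory connectivity treats every landscape cell as a node in an electrical network and solves a sparse graph-Laplacian system for the current flow, or effective resistance, between habitat patches. The Python sketch below shows that idea on a toy grid with an invented resistance surface; it illustrates the general circuit-theory approach, not GFlow’s own supercomputer-scale implementation.

    # Illustrative circuit-theory connectivity on a toy 4x4 grid (not GFlow itself):
    # build a graph Laplacian from cell conductances and solve for node voltages
    # when current is injected between two habitat patches.
    import numpy as np
    from scipy.sparse import lil_matrix, csr_matrix
    from scipy.sparse.linalg import spsolve

    resistance = np.ones((4, 4))   # hypothetical resistance surface
    resistance[1:3, 1:3] = 10.0    # a costly barrier in the middle
    rows, cols = resistance.shape
    n = rows * cols

    def node(r, c):
        return r * cols + c

    # Graph Laplacian: conductance between 4-neighbour cells (series resistances).
    L = lil_matrix((n, n))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    g = 2.0 / (resistance[r, c] + resistance[r2, c2])
                    i, j = node(r, c), node(r2, c2)
                    L[i, j] -= g
                    L[j, i] -= g
                    L[i, i] += g
                    L[j, j] += g

    source, ground = node(0, 0), node(3, 3)   # patches to connect
    current = np.zeros(n)
    current[source] = 1.0                     # inject one unit of current

    # Ground one node (drop its row and column), then solve L v = i.
    keep = [k for k in range(n) if k != ground]
    voltages = np.zeros(n)
    voltages[keep] = spsolve(csr_matrix(L)[keep, :][:, keep], current[keep])
    print("effective resistance between patches:", voltages[source])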

“The take-home from this is that you can quickly compute very complicated scenarios to show decision-makers the impacts of various outcomes,” said Baldwin, whose conservation career has spanned decades throughout the United States and Canada. “You want to put a road here? Here’s what happens to the map. You want to put the road over there? We’ll recalculate it and show you how the map changes. GFlow is dynamic, versatile and powerful. It’s a game-changer in a variety of ways.”

When Leonard began his initial work under Baldwin’s tutelage, the existing software used for habitat connectivity mapping was slow, inefficient and consumed enormous amounts of computer memory. Leonard soon realized he would need the expertise of a computational scientist to overcome these frustrating limitations. Thus began his collaboration with Duffy, and both ended up spending countless hours in front of their computer screens, synthesizing Leonard’s ecological know-how with Duffy’s cyber skills. The end result? A large-scale map that once would have taken more than a year to generate now takes just a few days.

“Until GFlow, the software available for ecologists was poorly conceived in terms of speed and memory usage,” said Duffy, who was the lead developer of the new software. “So I rewrote the code from scratch and reduced individual calculations from about 30 minutes to three seconds. And I also significantly reduced the amount of memory generated by the program. In the old code, one project we worked on took up 90 gigabytes of memory. With GFlow, only about 20 gigabytes would be needed. It’s most efficient when used in conjunction with a supercomputer, but it even works in a more limited capacity on desktop computers.”

GFlow will enable scientists to solve ecological problems that span large landscapes. But in addition to helping animals survive and thrive, GFlow can also be used for human health and well-being. For instance, GFlow has the capacity to monitor the spread of the Zika virus by documenting the location of each new case and then predicting its potential spread to previously uninfected areas.

“This software can monitor the flow of any natural phenomenon across space where there is heterogeneous movement that is based on some resistance to this movement,” Leonard said. “Besides Zika, there are other health and disease patterns that can be modeled using GFlow. And also other natural phenomena, such as the spread of wildfire in the southeastern United States and other areas around the country. We can parameterize wind strength and shifts, how much fuel is on the ground and calculate the spread across really large areas. So we’re examining all these possibilities and are open to collaboration with other domain experts who might be interested in using GFlow.”

Additional contributors to Friday’s journal article were Brad McRae, a senior landscape ecologist for The Nature Conservancy; and Viral Shah and Tanmay Mohapatra of Julia Computing, a privately held company. Ron Sutherland and the Wildlands Network provided valuable data that was used extensively during the development of GFlow.

“Collaboration has played a huge role in this,” Baldwin said. “We’ve worked across departments. We’ve worked across boundaries. And without Clemson’s investment in the Palmetto Cluster supercomputer, none of this would have been possible. This collaboration has improved spatial modeling for ecological processes in time and space. And because it’s so computationally efficient, it can be done for extremely large areas — regions, nations, continents or possibly even the entire planet — in unprecedented detail.”

[Source: SD]

Safer, less vulnerable software is the goal of new computer publication

We can create software with 100 times fewer vulnerabilities than we do today, according to computer scientists at the National Institute of Standards and Technology (NIST). To get there, they recommend that coders adopt the approaches they have compiled in a new publication.

The 60-page document, NIST Interagency Report (NISTIR) 8151: Dramatically Reducing Software Vulnerabilities, is a collection of the newest strategies gathered from across industry and other sources for reducing bugs in software. While the report is officially a response to a request for methods from the White House’s Office of Science and Technology Policy, NIST computer scientist Paul E. Black says its contents will help any organization that seeks to author high-quality, low-defect computer code.

“We want coders to know about it,” said Black, one of the publication’s coauthors. “We concentrated on including novel ideas that they may not have heard about already.”

Black and his NIST colleagues compiled these ideas while working with software assurance experts from many private companies in the computer industry as well as several government agencies that generate a good deal of code, including the Department of Defense and NASA. The resulting document reflects their cumulative input and experience.

Vulnerabilities are common in software. Even small applications have hundreds of bugs by some estimates. Lowering these numbers would bring many advantages, such as reducing the number of computer crashes and reboots users need to deal with, not to mention decreasing the number of patch updates they need to download.

The heart of the document, Black said, is five sets of approaches, tools and concepts that can help, all of which can be found in the document’s second section. The approaches are organized under five subheadings that, despite their jargon-heavy titles, each possess a common-sense idea as an overarching principle.

These approaches include:

  • using math-based tools to verify that code will work properly;
  • breaking a computer’s programs up into modular parts, so that if one part fails the whole program doesn’t crash (a minimal sketch of this idea follows below);
  • connecting analysis tools for code that currently operate in isolation;
  • using programming languages appropriate to the task the code attempts to carry out; and
  • developing evolving and changing tactics for protecting code that is the target of cyberattacks.
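
As an illustration of the second item, here is a minimal Python sketch, not taken from the NIST report, of confining a risky component behind a narrow interface so that its failure degrades gracefully instead of crashing the whole program:

    # Minimal sketch of the "modular parts" idea (hypothetical example):
    # isolate a risky parser behind a narrow interface so one failure
    # cannot take down the whole program.
    from typing import Optional

    def parse_port(raw: str) -> Optional[int]:
        """Risky parsing confined to one small, separately testable module."""
        try:
            port = int(raw.strip())
        except ValueError:
            return None                       # fail locally, never raise to callers
        return port if 0 < port < 65536 else None

    def start_server(config_line: str) -> str:
        port = parse_port(config_line)
        if port is None:
            return "refusing bad port value; falling back to 8080"
        return f"listening on port {port}"

    if __name__ == "__main__":
        print(start_server(" 8443 "))            # listening on port 8443
        print(start_server("eight thousand"))    # degraded, but no crash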

In addition to the techniques themselves, the publication offers recommendations for how the programming community can educate itself about where and how to use them. It also suggests that customers should request the techniques be used in development. “You as a consumer should be able to write it into a contract that you want a vendor to develop software in accordance with these principles, so that it’s as secure as it can be,” Black said.

Security is, of course, a major concern for almost everyone who uses technology these days, and Black said that the White House’s original request for these approaches was part of its 2016 Federal Cybersecurity R&D Strategic Action Plan, intended to be implemented over the next three to seven years. But though ideas of security permeate the document, Black said the strategies have an even broader intent.

“Security tends to bubble to the surface because we’ve got adversaries who want to exploit weaknesses,” he said, “but we’d still want to avoid bugs even without this threat. The effort to stymie them brings up general principles. You’ll notice the title doesn’t have the word ‘security’ in it anywhere.”

[Source: SD]

Node.js 7 set for release next week

The Node.js Foundation will release version 7 of the JavaScript platform next week. With the new release, version 6 will move to long-term support, and version 0.10 will reach “end of life” status.

Node 7, offered in beta in late September, is a “checkpoint release for the Node.js project and will focus on stability, incremental improvement over Node.js v6, and updating to the latest versions of V8, libuv, and ICU (International Components for Unicode),” said Mikeal Rogers, Foundation community manager.

But version 7 will have a short shelf life. “Given it is an odd-numbered release, it will only be available for eight months, with its end of life slated for June 2017,” Rogers said. “Beyond v7, we’ll be focusing our efforts on language compatibility, adopting modern Web standards, growth internally for VM neutrality, and API development and support for growing Node.js use cases.”

The release of a new version means status changes for older versions. Most important, users on the 0.10 line need to transition off that release at once, since it will no longer be supported after this month, the Foundation said. There will be no further releases, including security or stability patches. The 0.12 release, meanwhile, goes to End of Life status in December.

“After Dec. 31, we won’t be able to get OpenSSL updates for those versions,” Rogers said. “So that means we won’t be able to provide any security updates. Additionally, the Node.js Core team has been maintaining the version of V8 included in Node.js v0.10 alone since the Chromium team retired it four years ago. This represents a risk for users, as the team will no longer maintain this.”

Version 6 becomes a Long Term Support (LTS) release today. “In a nutshell, the LTS strategy is focused on creating stability and security to organizations with complex environments that find it cumbersome to continually upgrade Node.js,” Rogers said. “These release lines are even-numbered and are supported for 30 months.”

Node v6 is the stable release until April 2018, meaning that new features only land in it with the consent of the Node project’s core technical committee. Otherwise, changes are limited to bug fixes, security updates, documentation updates, and improvements where the risk of breaking existing applications is minimal. After April 2018, v6 transitions to maintenance mode for 12 months, with only critical bugs and security fixes offered, as well as documentation updates.

“At the current rate of download, Node.js v6 will take over the current LTS line v4 in downloads by the end of the year,” Rogers said. “Node.js v4 will stop being maintained in April 2018.”

[Source: JW]

New software helps to find out why ‘jumping genes’ are activated

The genome is not a fixed code but a flexible one: it allows changes in the genes. Transposons, so-called jumping genes, however, interpret this flexibility far more freely than “normal” genes. They reproduce within the genome and choose their positions themselves. Transposons can also jump into a gene and render it inoperative. Thus, they are an important distinguishing mark in the development of different organisms.

Unclear what triggers transposon activity

However, it is still unclear how jumping genes developed and what influences their activity. “In order to find out how, for instance, climate zones influence activity, we must be able to compare the frequency of transposons in different populations — in different groups of individuals,” explained bioinformatician Robert Kofler from the Institute of Population Genetics at the University of Veterinary Medicine, Vienna. But this frequency has not yet been determined precisely.

New software for a low-cost method

Transposons are detected by DNA sequencing. But this detection cannot be carried out for every single member of a population. “At the moment, this would go beyond the available resources regarding finance and amount of work. The only — and much cheaper — option is to analyse an entire population in one reaction,” explained last author Christian Schlötterer. This method, which he has established using the example of fruit flies, is called Pool-Seq. It is also routinely applied to detect transposons. Existing analysis programmes, however, could not provide a precise result in this case. So far, each analysis has been biased by different factors such as the sequencing depth and the distance between paired reads.

For this purpose, Kofler developed the new software PoPoolationTE2. “If we sequence entire populations, each reaction provides a different result. The number of mixed individuals is always the same, but the single individuals differ,” explained Kofler. Furthermore, technical differences in the sample processing, among others, have influenced the analysis so far. PoPoolationTE2 is not affected by these factors. Thus, questions about the activity of transposons can be answered precisely for Pool-Seq reactions.
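
For intuition only, the basic quantity such tools estimate is a simple proportion: the fraction of reads spanning an insertion site that support the transposon rather than the reference sequence, taken as the insertion’s frequency in the pooled population. The toy Python sketch below shows that estimate; PoPoolationTE2’s actual method additionally corrects for sequencing depth, paired-read distances and the other biases described above.

    # Toy estimate of transposon insertion frequency from pooled sequencing
    # (an illustration of the idea, not PoPoolationTE2's algorithm).
    def insertion_frequency(reads_with_insertion: int, reads_with_reference: int) -> float:
        total = reads_with_insertion + reads_with_reference
        if total == 0:
            raise ValueError("no informative reads at this site")
        return reads_with_insertion / total

    # Example: the same insertion site in two pooled populations.
    pop_a = insertion_frequency(72, 48)    # 0.60 -> common in pool A
    pop_b = insertion_frequency(9, 111)    # 0.075 -> rare in pool B
    print(f"pool A: {pop_a:.2f}, pool B: {pop_b:.2f}")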

Interesting for cancer research

“The unbiased detection of transposon abundance enables a low-cost comparison of populations from, for instance, different climate zones. In a next step, we can find out if a transposon is very active in a particular climate zone,” said Kofler. In principle, the bioinformatician developed this new software for Pool-Seq. But as this method is also applied in medical research and diagnostics, the programme is also of interest for cancer research or the detection of neurological changes, since transposons are also active in the brain.

Lab experiments confirm influencing factors

Lab experiments can indicate the factors influencing transposons. Last author Schlötterer explained these factors with reference to an experiment with fruit flies: “We breed a hundred generations per population and expose them to different stimuli. We sequence every tenth generation and determine if a stimulus has influenced the activity of the transposons. Thus, we can describe the activity of transposons in fast motion, so to say.” If the abundance is low, the scientists assume that the transposons are only starting to become more frequent. If a transposon reproduces very quickly in a particular population, this is called an invasion. If a jumping gene is detected in one entire population but not in another, it could have been positively selected.

[Source: Science Daily]