Ransomware attacks leave customers powerless, companies ignore cyber threat


Ransomware attack: The companies hit by NotPetya were caught by an attack exploiting the same vulnerability as WannaCry, because they still hadn't updated their systems.

As the “NotPetya” ransomware attack spreads around the world, it’s making clear how important it is for everyone – and particularly corporations – to take cybersecurity seriously. The companies affected by this malware include power utilities, banks and technology firms. Their customers are now left without power and other crucial services, in part because the companies did not take action and make the investments necessary to better protect themselves from these cyberattacks.

Cybersecurity is becoming another facet of the growing movement demanding corporate social responsibility. This broad effort has already made progress toward getting workers paid a living wage, encouraging companies to operate zero-waste production plants and practice cradle-to-cradle manufacturing – and even getting them to donate products to people in need.

The overall idea is that companies should make corporate decisions that reflect obligations not just to owners and shareholders, customers and employees, but to society at large and the natural environment. As a scholar of cybersecurity law and policy and chair of Indiana University’s new integrated program on cybersecurity risk management, I say it’s time to add cyberspace to that list.

Online security affects everyone

The recent WannaCry ransomware attack affected more than 200,000 computers in 150 nations. The results of the attack made clear that computers whose software is not kept up to date can hurt not only the computers’ owners, but ultimately all internet users. The companies hit by the NotPetya attack didn’t heed that warning, and got caught by an attack using the same vulnerability as WannaCry, because they still hadn’t updated their systems.

Some policymakers and managers are taking notice around the world. In the U.S., the Department of Homeland Security, the chief federal agency dealing with cybersecurity, has highlighted businesses’ “shared responsibility” to protect themselves against cyberattacks. Consumers can’t protect their utility services, banking systems or even their personal data on their own, and must depend on companies to handle that security.

Cybersecurity is an effort that not only protects – and even benefits – a company’s bottom line but also contributes to overall corporate and societal sustainability. In addition, by protecting privacy, free expression and the exchange of information, cybersecurity helps support people’s human rights, both online and offline.

Vaccinating cyberspace

If more companies get serious about cybersecurity, the internet ecosystem will be safer for everyone. The concept is much like vaccinating people against disease: If enough people are protected, the others benefit too, through what is called “herd immunity.”

In terms of deterring hackers, the number of vulnerable targets will drop, making it harder for hackers to find them, and less worthwhile to even look. And more companies will have defenses ready when cyber attackers come calling. This isn’t a perfect solution: With enough time and resources, any system is vulnerable. But this change in corporate perception is an important step in developing a global culture of cybersecurity.

Customers can get involved in this effort, demanding better cybersecurity from companies they do business with. These can include online retailers, whether small specialized sellers or giants like Amazon. But local bricks-and-mortar stores with customer loyalty programs that have built their brands on trust can also be susceptible to consumer pressure.

To date, it’s been hard to know which companies have the best cybersecurity practices. The product and service reviewers at Consumer Reports have made a start: In March they started evaluating devices, software and mobile apps for privacy and cybersecurity.


Advocacy groups like the Internet Society and many others should ask companies to discuss cybersecurity efforts in their reports to shareholders. And they should urge government agencies to develop voluntary programs like the U.S. Environmental Protection Agency’s Energy Star appliance-efficiency rating system. The U.K. has a certification like this for cybersecurity, called Cyber Essentials. These efforts don’t require executives or managers to make different decisions, but help inform them – and the public – about how the choices they make affect consumers.

Ultimately, companies will play a huge role in shaping the future of our shared experience online. Cybersecurity and data privacy are key elements of this, and it’s time consumers demand corporations treat them as the 21st-century social responsibilities they are.


Petya ransomware cyberattack: What it does, how to protect your PC and more


Petya ransomware is part of a new wave of cyberattacks that has hit computer servers all across Europe, locking up computer data and crippling enterprise services. (Representational Image. AP)

Petya ransomware is part of a new wave of cyberattacks that has hit computer servers all across Europe, locking up computer data and crippling enterprise services in the corporate sector. Ukraine and Russia are the worst affected, though the attack has also impacted some companies in the US and several Western European countries. So what exactly is the Petya ransomware attack, and how does it affect a PC? And what can one do to protect against it? We explain everything you need to know.

What is Petya ransomware? What vulnerability is it exploiting in the Windows system?

Petya is ransomware, similar in effect to the WannaCry attack. According to security research firm Kaspersky, this outbreak could be a variant of Petya.A, Petya.D or PetrWrap. However, the firm doesn’t think it is a variation of the WannaCry cyberattack.

The post from Kaspersky also notes that Petya uses the same EternalBlue exploit that was used in the WannaCry attack. The blogpost states, “This appears to be a complex attack which involves several attack vectors. We can confirm that a modified EternalBlue exploit is used for propagation at least within corporate networks.”

For those who don’t remember, the WannaCry attack affected over 300,000 computers globally by exploiting this same security vulnerability in Microsoft’s Windows systems. Microsoft had issued a security patch to close the ‘EternalBlue’ hole in Windows 10, Windows 8, Windows 7 and even Windows XP PCs. The problem, as with many Windows updates, is that people might not have downloaded or applied the security patch.

How exactly does Petya spread? What does it do to an infected computer?

Petya follows WannaCry’s pattern: the ransomware locks up a computer’s files and demands $300 in Bitcoin as ransom to unlock the data. All data on an infected computer or network gets encrypted.

This message is flashed on infected computers: “If you see this text, then your files are no longer accessible, because they are encrypted. Perhaps you are busy looking for a way to recover your files, but don’t waste your time. Nobody can recover your files without our decryption service.”

Petya is ransomware which, similar to WannaCry, demands $300 in Bitcoin from users.

According to Kaspersky’s security team, in order to get the credentials it needs to spread, the ransomware relies on a custom tool, “a la Mimikatz.” This extracts credentials from the lsass.exe process, one of the crucial components of the Windows system, which stands for Local Security Authority Subsystem Service.

The attack is believed to have started via an update to a piece of third-party Ukrainian software called MeDoc, which is used by many government organisations in the country. According to reports, this is also why Ukraine was the worst affected. Kaspersky says over 60 per cent of attacks took place in Ukraine, with Russia second at around 30 per cent, though these are just the firm’s initial findings.

Once the malware infects a computer, it waits for an hour or so and then reboots the system. The files are encrypted during this reboot, after which the user gets a ransom note on their PC asking them to pay up. The note also warns users against switching off their PC during the rebooting process, claiming that doing so could make them lose their files.

As the Kaspersky blog points out, victims are asked to pay the ransom in Bitcoin to a particular address, and then to email their Bitcoin wallet ID and personal number to the address [email protected] to confirm the transaction has been made.

So how can the ransomware attack be stopped?

The malware seems to try to infect the entire network, including known server names. According to Kaspersky, “Each and every IP on the local network and each server found is checked for open TCP ports 445 and 139. Those machines that have these ports open are then attacked with one of the methods described above.” So yes, this is a fairly comprehensive cyberattack.
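Kaspersky’s description of that propagation step can be illustrated with a short sketch. The code below is not the malware’s actual code; it is a defender’s-eye view, in Python, of what checking a network for hosts with TCP ports 445 and 139 open looks like (the function names are my own):

```python
import socket

def open_smb_ports(host, ports=(445, 139), timeout=0.5):
    """Return the subset of SMB-related TCP ports accepting connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def audit_network(hosts):
    """Map each host to its exposed SMB ports; hosts with none are omitted."""
    return {h: p for h in hosts if (p := open_smb_ports(h))}
```

Administrators can run the same check against their own address range to find machines that should have SMB blocked at the firewall.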

When it comes to decrypting files, currently there is no solution. According to the security researchers at Kaspersky, “the ransomware uses a standard, solid encryption scheme.” The firm notes that unless the hackers made a mistake, the data can’t be accessed.

When it comes to decrypting files, there is currently no solution. (Image source: AP)

So who is behind the Petya cyberattack? Which companies and countries have been impacted?

Researchers are still trying to establish who is responsible for this attack, but its impact is serious. In Ukraine, government offices, energy companies, banks, cash machines, gas stations and supermarkets have all been impacted, reports the Associated Press. Ukrainian Railways, Ukrtelecom and the Chernobyl power plant were also affected by the attack.

Multinational companies have also been hit, including law firm DLA Piper, shipping giant AP Moller-Maersk, drugmaker Merck and Mondelez International, which owns food brands such as Oreo and Cadbury. In the US, some hospitals have been impacted by the cyberattack, and Poland, Italy and Germany are among the other countries affected. In India, the Jawaharlal Nehru Port has been impacted, given that Moller-Maersk operates the Gateway Terminals India (GTI) facility at JNPT, which has capacity for over 1.8 million standard container units.

So what happens now?

For starters, it seems the email address being used by the hackers has been suspended by the service provider. In a blogpost Posteo wrote, “We became aware that ransomware blackmailers are currently using a Posteo address as a means of contact. Our anti-abuse team checked this immediately – and blocked the account straight away.” Posteo also confirmed that it is no longer possible for the attackers to access the email account, send mails or retrieve messages from it.

For now, users who have lost their data can’t really recover it unless they have a backup. There’s no way of getting the decryption key from the hackers, since the email account has been shut down. However, according to a tweet from HackerFantastic, when the system goes in for a reboot, the user should power off the PC. His tweet reads, “If machine reboots and you see this message, power off immediately! This is the encryption process. If you do not power on, files are fine.”

The problem with Petya is that right now researchers have no solution for decrypting these files. There’s also no way of stopping the attack from spreading, given that it exploits vulnerabilities in the network.

For users, it is best to keep a backup of all their data, preferably stored offline and encrypted. Users should also not click on links in emails from suspicious senders, or on links asking for access to personal information. Also keep your Windows PC updated with the latest software.
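That backup advice can be sketched in a few lines. The layout and names below are illustrative assumptions rather than a recommendation of any particular product: the script packs a directory into a timestamped archive and stores a SHA-256 checksum next to it so the backup’s integrity can be verified later. A real backup should additionally be encrypted and kept offline.

```python
import hashlib
import tarfile
import time
from pathlib import Path

def back_up(source_dir, dest_dir):
    """Archive source_dir into dest_dir and record a SHA-256 checksum."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_suffix(".sha256").write_text(digest)
    return archive, digest

def verify(archive):
    """Recompute the archive's checksum and compare it with the stored one."""
    archive = Path(archive)
    stored = archive.with_suffix(".sha256").read_text()
    return hashlib.sha256(archive.read_bytes()).hexdigest() == stored
```

Verifying the checksum before restoring guards against a backup that was itself corrupted or tampered with.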


Ransomware attack needs global solution

WannaCry is a computer worm that locks up computers until a ransom is paid.


OPINION: Last weekend’s global ransomware attack, “WannaCry”, has raised many questions – from quirky economics questions such as “Is $300 just the right amount for a ransom?” to “Should we pay the ransom?” What we do know is that the malicious software has now spread to at least 150 countries, with reports of serious impacts on the National Health Service (NHS) in the UK, and a range of other government and private sector activity, including reported impacts on big companies like Telefonica, FedEx Corp. and the French car manufacturer Renault.

New Zealand appears to have avoided the worst effects so far, with Russia, India, Spain, Taiwan and Ukraine some of the most affected countries. The computer worm, which locks up computers until a ransom is paid, is expected to infect millions more computer systems in the coming days through newly emerged variants, and will likely cause direct and indirect costs running into the billions of dollars globally.

There are some basic but important lessons that we can learn from the attack, even at this early stage. First, there are still many computers using Windows XP, which Microsoft stopped supporting in 2014. Discontinuation means that between 2014 and now there have been no security updates to patch the holes discovered in the system – leaving doors open for hackers to enter through. What is truly scary is that 95% of ATMs in the world are reportedly still running Windows XP. Imagine the field day hackers could have with access to ATMs. In fact, the late Kiwi hacker Barnaby Jack demonstrated how to make an ATM spew cash through a simple hack at the hacker conference Black Hat 2010. In the last few days, as a global response to the attack, Microsoft (finally) released security updates for all old Windows systems, including Windows XP and Windows Server 2003.


Cyber security researchers say North Korea might be linked to the WannaCry ransomware cyber attack that has infected more than 300,000 computers worldwide.

This brings us to the second point. When their business reputation is threatened, software companies can offer updates even for very old software just to save the day. Microsoft has successfully demonstrated its true capability in doing just that.

Third, despite the importance of data, many people do not back up their data in external hard drives, cloud computing environments such as Dropbox, or other computers. This basic human flaw has been a huge enabler of these types of ransomware attacks.

Fourth, attributing attacks is a difficult problem in cyberspace. That said, the WannaCry hackers may not be as sophisticated as the original authors of the US National Security Agency (NSA) cyber weapons this ransomware is based upon. When the vigilante group Shadow Brokers leaked these NSA cyber weapons, it was only a matter of time before some malicious party modified the software into malware. This has happened in the past with other global attacks, such as the Blaster worm more than a decade ago.

In WannaCry, hackers left behind a few trails, such as a URL which serves as a kill-switch to stop the spread of the ransomware, and patterns which show a certain style of software coding. Some researchers from Google suspect that the attack is linked to North Korea, because the coding style bears similarity to that of the notorious Lazarus Group, responsible for hacks into South Korea (the 2013 DarkSeoul operation) and the Sony Pictures hack in 2014.
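The kill-switch trail worked in a simple way: before doing anything else, the worm tried to reach a hard-coded, unregistered web domain and shut itself down if the request succeeded, which is why a researcher registering (“sinkholing”) that domain halted the spread. A minimal sketch of the logic, using a deliberately fake placeholder domain rather than the real URL:

```python
import urllib.request

# Placeholder only; the real WannaCry kill-switch domain is not reproduced here.
KILL_SWITCH_URL = "http://killswitch-placeholder.invalid/"

def should_run(url=KILL_SWITCH_URL, timeout=3):
    """Mimic the kill-switch check: stop if the hard-coded domain answers."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return False  # domain is live (sinkholed by a researcher): abort
    except OSError:
        return True   # domain unreachable: the worm would continue
```

While the domain stayed unregistered the check always failed and the worm kept spreading; the moment it was registered, every new infection aborted itself.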

These issues aside, the crisis demonstrates the dangers posed by a growing tendency in national security establishments to develop “cyber weapons” that can be used to disrupt and destroy computer systems, and the corresponding need for enhanced global co-operation on cyber security threats.

Since the terrorist attacks on 9/11, the NSA has taken a lead role in developing offensive cyber weapons to deploy against foreign adversaries. This has been widely revealed through leaks by WikiLeaks, Edward Snowden and others. The problem is these capabilities can be hacked themselves. As with most weapons, they can also be used back against us and we witnessed this irony in WannaCry.

The proliferation of malicious cyber tools from states to non-state actors is as much of a danger as the collateral damage that malicious software can cause. The malware used in the infamous Stuxnet attack against Iranian nuclear centrifuges, likely by the US and Israel, spread to more than 60 countries and is still being modified and used for malicious purposes. It is very difficult to isolate targets when using offensive cyber capabilities without that malware spreading and/or being reverse engineered. Those states that are at the cutting edge of developing malicious cyber tools should expect to become more likely targets for hackers themselves.

Another major problem, which can only be solved by global cyber co-operation, is information sharing. In the case at hand it appears that the NSA knew about the existence of the Windows software vulnerability that has been so ruthlessly exploited, but did not disclose that information until it was too late. Information sharing between governments and the private sector, in both directions, also happens too slowly in many cases. This lack of trust and transparency has been a feature of the way cyber security has been dealt with, and has precluded effective cross-sector responses to cyber security issues. A global information sharing platform may be needed to immunise systems against the impacts of these types of cyber attacks.

A final problem is the lack of global investment in cyber security in both the government and private sectors. The political row that has erupted in the UK over investment in NHS digital infrastructure is noteworthy in this context. When public sector organisations are starved of funding there is little incentive to invest in upgraded software and hardware. If some NHS computers were not operating on outdated Windows XP operating systems then the effects on the NHS’s ability to keep frontline services running, including X-ray and chemotherapy services, might have been less severe.

The New Zealand government has done a great job in that respect by recognising the need for sustained funding for cyber security research such as STRATUS, and has taken big strides in recent years in enhancing our own cyber security capabilities and institutions, including the recent establishment of our own national Computer Emergency Response Team (CERT NZ). However, these kinds of cyber attacks cannot be dealt with by countries working in isolation. The global ransomware attack demonstrates a pressing need for global solutions.


Why the solution to ransomware may be predictive analytics

Ransomware is quickly becoming a problem for both businesses and personal computer users. It is also becoming more sophisticated; users may not even have to click or download something to fall victim to this scam. Ransomware can spread between networked PCs and servers quite quickly, which leaves the owners of these machines at the mercy of hackers demanding money for access to the owners' most valuable files.

As these threats to steal information and hold technology hostage become more real, so does the need for a way to stop such attacks. Predictive analytics may be the key to preventing ransomware from getting a hold on computers and servers. Here are several reasons why predictive analytics may be the most essential tool to use when trying to avoid ransomware.

Predictive analytics have a good track record

It may be a new tool in the fight against hacking, but predictive analytics have actually been used by the U.S. military to combat cyber threats and to make real world decisions. They have used predictive analytics to help them dive through massive amounts of data in order to make better and more informed decisions about the possible tactics they will use.

This data can include things like past military engagements, social and economic factors and a myriad of other information. Diving through this data by hand would take an incredible amount of time, but with predictive analytics, they have been able to drill down to the most essential predictive data. The military has used this information to prepare for eventualities they may not have seen coming otherwise, which in turn helps avoid costly mistakes.

Predictive analytics stops the problem before it starts

Traditional antivirus programs often treat the problem only after it has already infected your computer. Many of these programs do have some preventive options, but they often rely on their users to avoid viruses and hacking programs.

As malware and ransomware evolve, avoidance is becoming trickier, since some ransomware is able to infect a computer without the user making grave errors.  Though human intervention may still be required, predictive analytics can empower users and companies with the information they need to steer clear of the potential threats that ransomware can pose.

It can be used on large, complicated systems

Though ransomware can affect individual users, this kind of malware tends to target larger collectives of users, such as companies or parts of government. The larger the network, the harder it can be to protect against cyber threats. Predictive analytics can readily be applied to large networks because it can be scaled to dig through large amounts of data. Once that data is analysed, it can be turned over to IT employees, who can examine the essential findings further and ensure that their company or department is protected against ransomware and other threats.
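As a toy illustration of this idea (not any vendor's actual product), the sketch below learns a per-host baseline of file-write activity from historical data and flags hosts whose current rate is a statistical outlier, the kind of burst that mass file encryption tends to produce:

```python
from statistics import mean, stdev

def fit_baseline(history):
    """history: {host: [past file-writes-per-minute samples]} -> {host: (mean, stdev)}."""
    return {host: (mean(samples), stdev(samples))
            for host, samples in history.items()}

def flag_anomalies(baseline, current, z=3.0):
    """Return hosts whose current write rate exceeds mean + z * stdev."""
    return [host for host, rate in current.items()
            if rate > baseline[host][0] + z * baseline[host][1]]
```

Production systems would use richer features (file entropy, renamed extensions, the processes involved) and more capable models, but the shape of the problem is the same: learn what normal looks like, then alert on deviation.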

The cost of this technology has also gone down over the last several years, like with converged systems and the hybrid cloud, so it has become a much more cost effective way to deal with problems like ransomware. This decrease in cost has also caused more cyber security companies to start adding predictive analytics to the list of solutions they offer their clients.

Ransomware is evolving constantly

Ten years ago, few people knew what ransomware was. Now it is not only making real-world headlines, but has also been the main plot point in many TV shows and movies, making it a household name. As people try to stop its effects, hackers will ensure that ransomware evolves to overcome such defences. Most current cyber defences are made to defeat current versions of ransomware, but they are not made to defeat evolved versions of it.

Predictive analytics is able to account for this evolution. It can be used to predict what sorts of evolutions may occur, and those predictions can then be used to head off the next generation of ransomware or other malware. It is also very likely that, as other preventative software evolves to stop things like ransomware, predictive analytics will continue to be part of these efforts to ensure cyber safety.

Though ransomware may not be going away anytime soon, we may see its negative impact lessening as more government departments and private companies invest in solutions that include predictive analytics.

By using predictive analytics to explore how attacks might evolve, companies can solve more complex problems that threaten the security of their computers and their networks.


Enhancing the Power of Elastic Email via CRM Integration

Elastic Email Integration

Elastic Email is a powerful email platform that can help improve email marketing campaigns by easily creating newsletters and sending email more efficiently. However, it still needs people to create or update marketing lists, process unsubscribes in a CRM system and create and send campaign reports for analysis. This takes time, is error prone and is an unnecessary employee cost. By integrating Elastic Email with CRM systems it is possible to remove the costly administration from email marketing.

Synchronising marketing lists and unsubscribes

Contact lists are a vital component of marketing campaigns and therefore need to be managed and updated on a regular basis. If your business uses a CRM system to collate and manage these contact lists then updating these manually in Elastic Email will be a costly, employee intensive process.

TaskCentre can automatically synchronise your marketing lists between Elastic Email and your CRM system on a scheduled or database event. It will also write back to your CRM system all the campaign unsubscribes and hard bounces that Elastic Email has encountered.

Full email marketing automation to business rules

From time to time you might need to run ‘unplanned’ email campaigns. Factors such as slow-moving stock or pressure to cross-sell and up-sell all mean more email campaigns need to be processed by the marketing team. Yet finding the time to run these unplanned campaigns can be difficult.

TaskCentre can automatically create and send Elastic Email campaigns based on data events you define e.g. slow moving stock. It will also update your CRM application with the results.

Automating campaign report distribution

Once an email campaign has been sent you will probably need to generate a report detailing the open and click-through results. This report will then be required by the sales team to update the CRM system and set up follow-up activities. More administration for you and your company to absorb.

TaskCentre can automatically create and distribute open and click through reports and dynamically update CRM systems. Removing this administration allows your sales team to focus on the primary objective of sales.

The business benefits of integrating Elastic Email with your CRM solution include:

  • Removal of time consuming bi-directional data entry
  • Eradication of the risk of sending inappropriate communications to contacts whose statuses have changed in one application (CRM) but not your other systems (Elastic Email).
  • Improvement in employee productivity

If you want to find out more about Elastic Email integration or have any questions about what business process automation and application integration can do for your business call us on +44 (0)330 99 88 700.




Achieving Operational Efficiency via Workflow Automation

Operational Efficiency via Workflow Automation

Operational efficiency is the capability of an organisation to deliver products or services to its customers in the most cost-effective manner possible while still ensuring the high quality of its products, service and support. Unsurprisingly, improving operational efficiency is a fundamental objective for the majority of businesses.

The main contributing dynamic to operational efficiency is workflow. It’s therefore surprising how many organisations still depend on a large amount of manual processing, using legacy or siloed systems, paper-based forms and Excel spreadsheets, rather than automating the mundane tasks that underpin the smooth running of a business.

Automating Workflow

Workflow automation is about streamlining and automating business processes, whether for finance, sales & marketing, HR or distribution. Deploying workflow automation across each department’s everyday business processes will reduce the number of tasks employees would otherwise do manually, freeing them to spend more time on value-added work.

This essentially allows more things to get done in the same amount of time and will speed up production and reduce the possibility of errors. Tasks simply cannot get ignored or bypassed with an automated workflow system in place.

If the right processes are established then every employee will know what is expected of them and what they are accountable for. Any deviation will be escalated to management via a notification. In fact, management can benefit from being able to see exactly what is going on from a macro level, rather than having to request updates and reports.

If there is a bottleneck in the workflow then managers can make an informed decision and act on it immediately. With the ability to measure workflow, businesses can also understand where it is possible to improve.
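The deadline-and-escalation behaviour described above can be sketched in a few lines; the step structure and field names are illustrative assumptions, not TaskCentre's actual data model:

```python
from datetime import datetime

def overdue_steps(steps, now=None):
    """steps: list of dicts with 'name', 'owner', 'due' (datetime), 'done' (bool)."""
    now = now or datetime.now()
    return [s for s in steps if not s["done"] and s["due"] < now]

def escalation_messages(steps, now=None):
    """Build one management notification per overdue, unfinished step."""
    return [f"ESCALATION: '{s['name']}' (owner: {s['owner']}) is overdue"
            for s in overdue_steps(steps, now)]
```

Because every step carries an owner and a deadline, nothing can be silently skipped: an unfinished step either completes or surfaces to management.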

Workflow automation can also help organisations maintain standards and compliance by configuring the workflow to make sure all essential activities and outcomes are tracked and escalated. Aligning workflows with policy makes it straightforward for users to comply.

TaskCentre’s Workflow & Human Interaction Capabilities

TaskCentre can integrate all business systems within an organisation. The Workflow & Human Interaction capability can then provide organisations with a powerful and flexible workflow automation solution that ensures business rules are adhered to and administration is removed.

It permits users to receive and authorise multi-level workflow jobs, regardless of the device or business system the workflow starts or ends within, and creates a 100% audit trail for complete peace of mind.

Here are a few examples of processes that can benefit from workflow automation:


  • Bank account management
  • Invoice processing
  • Governance and compliance


  • Automate lead follow-up notifications
  • Trigger alerts regarding high-quality leads
  • Notify account manager about contract expirations


  • Upsell, Cross-sell and Cycle-based sell opportunities
  • Follow up shopping cart abandonments
  • Create automated email responses for enquiries


  • Supplier and contract management
  • New product development
  • Procurement and work order approvals

Getting the Best ROI from Workflow

Adding workflow capabilities to your system will revolutionise your business, but it is important to get workflow automation correct from the start otherwise it will only create problems in the future. Organisations need to understand what inefficiencies and business processes they want to address and make sure that everyone involved in the various workflows contributes to their design. If something is missing then the workflow won’t work.

Once it has been created and tested, the workflow ideally needs to be documented and communicated to the users. Human interaction will be required at some point in the workflow, so employees need to know what is expected of them. Additionally, if someone leaves the organisation, having the workflow documented will enable the next person to pick up the process very quickly, without adversely affecting it.




Exception Reporting: Automating the exception management process

Exception Reporting

Exception reporting is critical for businesses to identify and address key business process issues quickly, before they become a problem. Automating the exception management process removes the manual monitoring of data and ensures potential issues are caught early.

It’s not uncommon for organisations to have an employee manage exceptions by monitoring systems for data anomalies and potential issues. This approach to exception reporting can be an extremely risky and costly avenue to take. So how are businesses overcoming the risks associated with exception reporting?

Automated exception reporting

Automating exception reports enforces best business practice and removes the manual monitoring of critical data, ensuring that potential issues are avoided. Automated exception handling through TaskCentre removes the associated costs and the risk of human error, and enables:

  • Exception reporting via email or SMS
  • Automated exception reporting with attachments
  • Exception reporting with added workflow functionality
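Since TaskCentre builds these tasks with drag-and-drop tools rather than code, the following Python sketch is only an analogy of the underlying pattern: query for rows that breach a business rule, then format an alert body for email or SMS delivery. All names, fields and thresholds here are invented for illustration.

```python
# Hypothetical sketch of automated exception reporting; the order data,
# margin floor and message format are illustrative, not TaskCentre's API.

def find_exceptions(orders, margin_floor=0.15):
    """Identify orders whose margin falls below an agreed floor."""
    return [o for o in orders if o["margin"] < margin_floor]

def format_alert(exceptions):
    """Build a plain-text email/SMS body listing each exception."""
    lines = [f"Order {o['id']}: margin {o['margin']:.0%}" for o in exceptions]
    return "Exception report\n" + "\n".join(lines)

orders = [
    {"id": "A100", "margin": 0.22},
    {"id": "A101", "margin": 0.08},  # below the floor, so it is reported
]
alert = format_alert(find_exceptions(orders))
```

In a real deployment the `orders` list would come from a database query and the alert body would be handed to an email or SMS gateway rather than kept in memory.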

TaskCentre provides the ability to monitor critical business processes and generate management exception reports on them. This functionality allows organisations to make informed decisions on exception-based events immediately, and can automatically create and distribute detailed exception reports, triggered by a schedule or a database event, in the following formats:

  • Crystal Reports
  • Microsoft Reporting Services (SSRS)
  • MS Word
  • HTML

TaskCentre users such as Morgan Motor Company Ltd use TaskCentre to perform event-based management exception reporting. Automated exception handling in real time helps them identify cars that have been sold below a specific margin, or flag when a new part is created. Depending on the information needed by a team or individual, email and SMS notifications are personalised to the user’s exact requirements.

“TaskCentre saves us having to check data manually because it will automatically send an email to say this has or hasn’t happened, depending on the criteria that has been designated.”
Caroline Gudgeon, System Implementation Team, Morgan Motor Company Ltd

Exception handling using TaskCentre

TaskCentre’s drag and drop Tools enable organisations to streamline business processes to their exact requirements. TaskCentre is based upon Tasks and Tools. A Task performs part or all of a technical or operational process and is triggered by one or more Events or by inbound/outbound SMTP email. Tasks can contain any number of steps within a business process, each created with easy-to-use, drag and drop Tools.

TaskCentre Tools expose and consume information to and from each other in any combination, and can easily be expanded as and when required to meet business needs. Tools are used to create the steps within a Task, allowing any number of database structures to interact; the Tools are then joined together to build the automated exception report. TaskCentre Tools fall into six categories:

  • Event
  • Input
  • Format
  • Output
  • Execute
  • General

Click here for a detailed explanation of TaskCentre Tools.

Exception handling example using Crystal Reports

TaskCentre’s ‘Run Crystal Report Tool’ is a Format Tool which is used to create a Task Step to automate the running of Crystal Reports. Exception handling examples can include locating accounts in a database which have exceeded their credit limit, running a statement for a list of inventory sold, or producing an order acknowledgement for new orders.

When executing an exception report to identify accounts within a database which have exceeded their credit limit, the customer account number is mapped directly to a parameter field within the Crystal Report. Crystal Reports then uses the customer account number to run a database query of its own. Data can be extracted from an Input Step such as a Database Query (ODBC). This enables Crystal Reports to obtain additional information on the customer account in question to complete the report. The number of accounts located by the TaskCentre query dictates the number of exception reports that are generated.
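The credit-limit example can be approximated in code. This Python/SQLite sketch stands in for the Database Query (ODBC) step and the per-account report; the table, columns and account numbers are hypothetical, and a real Crystal Report would of course be far richer than a one-line string.

```python
import sqlite3

# Illustrative stand-in for the Database Query (ODBC) step: locate accounts
# over their credit limit, then generate one "report" per located account,
# mirroring the one-report-per-row behaviour described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (number TEXT, balance REAL, credit_limit REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", [
    ("ACC-1", 5000.0, 10000.0),
    ("ACC-2", 12500.0, 10000.0),   # over limit, so it triggers a report
])
over_limit = conn.execute(
    "SELECT number, balance, credit_limit FROM accounts "
    "WHERE balance > credit_limit"
).fetchall()

# One report per located account, as with the parameterised Crystal Report.
reports = [
    f"Account {n} exceeds limit: balance {b:.2f} vs limit {l:.2f}"
    for n, b, l in over_limit
]
```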

Each document exposed in the Step can then be delivered by using Output Steps such as Send Message (SMTP), File Transfer (FTP) or Save as File. As a result, the exception report can be used to present management information which can be distributed by email (as an attachment), SMS or published to form the content of a web portal or intranet.

Where a report requires access to data tables that have specific security associated with them, all relevant login details can be passed by the Step to the report concerned. The drag and drop process can be seen in the image below.


TaskCentre Tools used in this process:

  • Event > MS SQL Server Trigger Tool
  • Input > Database Query Tool (ODBC)
  • Format > Run Crystal Report Tool
  • Output > Send Message (SMTP)

For more information on running a Crystal Report using TaskCentre read this article.

Exception reporting with Workflow

Exception-based reporting consists of four distinct phases:

  • Identification
  • Evaluation
  • Action
  • Review

Unfortunately, many business applications lack the functionality to perform the action and review phases. TaskCentre not only addresses the identification and evaluation phases through its Notifications and Alerts and Document Automation Tools, but its workflow functionality also allows businesses to create unique employee workflows and report on process closures.
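As a rough sketch, the four phases can be expressed as a single pass over a data point. The handlers and thresholds below are hypothetical placeholders, not TaskCentre functionality.

```python
# Minimal sketch of the four-phase exception lifecycle:
# identification -> evaluation -> action -> review.
# The `act` callback and the audit log are invented for illustration.

def run_exception_lifecycle(value, threshold, act, log):
    # Identification: does the value breach the threshold?
    if value <= threshold:
        return False
    # Evaluation: classify severity before acting.
    severity = "high" if value > threshold * 2 else "low"
    # Action: route to the responsible employee or workflow step.
    act(value, severity)
    # Review: record closure so the process can be reported on later.
    log.append((value, severity, "closed"))
    return True

audit = []
handled = run_exception_lifecycle(
    value=250, threshold=100,
    act=lambda v, s: None,   # stand-in for a notification/workflow step
    log=audit,
)
```

The point of the sketch is the review phase: because every handled exception leaves a closure record, the process itself becomes reportable, which is exactly what plain alerting tools lack.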

In many businesses, workflow remains a manually driven process where authorisations are unique to departmental structures and business rules. Controlling these processes through workflow automation ensures that business rules are adhered to while removing unnecessary administration tasks. Below is a small selection of common automated workflow processes created by TaskCentre users.

  • Expense approvals
  • Single and/or multi authorisation workflows
  • Authorisation of Purchase Orders
  • Price change approvals and discount authorisations

Benefits of automated exception reporting

Automating management-by-exception processes ensures that data is provided to the right people at the right time. Exception reporting through automation reduces the risk of relying on employees to monitor and report on business exceptions, and provides detailed management exception reports and related workflow tasks via email or SMS. The common benefits TaskCentre users experience through automated exception management include:

  • Reduced exposure to financial risk
  • Enforced best practice and procedures
  • Consistent and detailed information to support exception reports and decision making
  • Removal of the risks associated with relying on employees to spot exceptions
  • Automation of business processes beyond the exceptions identified


[Source:- orbis-software]

Ensuring availability during the summer season


Richard Agnew, VP NW EMEA, Veeam, looks at delivering 24/7 availability.

Summer is here, and many are getting ready to take a well-deserved holiday. However, this does not mean that expectations of continuous access to applications, services and data should be lowered.

A modern business relies on delivering 24/7 availability, regardless of employee holidays. But what happens if a system breaks down during the week that the corporate IT manager is away? It will take longer than usual to get systems running again, and that in turn will hurt corporate revenue and reputation.

In order to avoid this, there are three simple precautions that businesses should take during the summer holiday season to ensure that corporate applications, services and data remain continuously available.

Avoid downtime
Planned and unplanned downtime now has a direct impact on vital services, whether the cost is measured in revenue or reputation. According to the 2016 Veeam Availability Report, the average cost of downtime for mission-critical applications in the UK is $100,266 per hour, and 59 percent of respondents revealed that their organisations’ applications encounter unplanned downtime, caused by IT failures, external forces or other factors, up to ten times per year. Even when employees are warned that a system will be down for a period of time, the outage can still hurt productivity, profitability and workflow. A modern business requires constant and reliable data availability, especially during the holiday period, when staffing levels are lower.
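Taken together, the report’s figures imply a substantial annual exposure. A quick back-of-the-envelope calculation, assuming for illustration that each outage lasts one hour:

```python
# Figures quoted from the 2016 Veeam Availability Report: $100,266 per hour
# of mission-critical downtime in the UK, up to ten unplanned outages per
# year. The one-hour outage length is an assumption for illustration only.
cost_per_hour = 100_266
outages_per_year = 10
hours_per_outage = 1  # assumed

annual_exposure = cost_per_hour * outages_per_year * hours_per_outage
# Roughly $1 million per year even at one hour per incident; at the
# five-hour average outage quoted elsewhere in the report, multiply by five.
```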

Delete unnecessary data
Garbage data is a recognised problem, and one that can have the biggest impact on a firm’s availability. Data like this eats up resources in the data centre, and can cause poor performance and system errors. To maintain high availability, it is essential to keep garbage data under control. Common culprits are installation files duplicated in several locations, as well as virtual machines that have become invisible because they were removed from the inventory but never permanently deleted.

It’s easy to accumulate garbage data when nobody knows what it is and no one wants to delete it in case it’s something important. This habit is a legacy from the days when data protection and availability solutions were far less sophisticated, and restoring lost data was a cumbersome, difficult process. Today, data recovery is much quicker, allowing you to recover what you want, when you want. Whether you have lost a backup copy of an important piece of data or unintentionally deleted some garbage data, it is much easier to restore, usually within seconds.

Have procedures in place before the holiday season
Another equally important issue is that data recovery for any application consumes one of the most valuable resources: time. The average downtime of critical applications in the IT systems of UK companies is five hours, a long time for an organisation to be offline when it could be as little as fifteen minutes. To ensure that services, applications and data are available throughout the holiday season, it is not only IT solutions that must be put in place, but also routines. Planning how to restore data in the fastest and easiest way when a problem arises is essential if we are to avoid unnecessary downtime and loss of corporate revenue.

Availability is as important during the holiday season as any other time, and downtime remains costly no matter what time of year it occurs. In today’s digital society, end-users are expecting organisations to be Always-On and available. Unfortunately, the average number of failures in modern enterprises is still high. According to the 2016 Veeam Availability Report, 84 percent of senior IT decision makers across the globe admit to suffering an ‘Availability Gap’ between what IT can deliver, and what users demand. This gap costs $16 million a year in lost revenue and productivity, in addition to the negative impact on customer confidence and brand integrity (according to 68 percent and 62 percent of respondents, respectively). This cost only increases as more time passes, and unless procedures are put in place before the holiday season, there is a high risk of unnecessarily long downtime and high revenue loss.

Is your data centre ready for its summer holiday?



[Source:- CBR]

A picture tells a thousand words: Visualising the mainframe for modern DevOps


Steven Murray, Solutions Director at Compuware, looks at the challenges of the mainframe in the app economy.

From shopping online, to scrutinising our bank balances, digital technology has become ingrained in everyday life, making applications the lifeblood of today’s organisations. Consumers expect these digital services to work seamlessly, which has driven businesses to adopt more intelligent approaches to developing and maintaining them, such as DevOps. These modern approaches enable IT teams to work more closely in order to deliver flawless digital services and updates in much shorter timeframes than have previously been possible.

However, although most of us use these digital services on a daily basis, very few realise that as we live, work and play online, the app economy is in large part underpinned by the mainframe. Despite having been around for over 50 years, even today the mainframe is responsible for crunching the numbers and processing over 30 billion transactions that power the digital economy every single day. Mobile banking, for example, relies heavily on a string of complex digital services that draw data from the mainframe, despite the core service visible to the consumer being delivered through a flashy new modern app. It’s therefore unsurprising that 88% of CIOs say the mainframe will remain a core business asset over the next decade.

The mainframe of the digital economy

Despite its central role in supporting digital services, very few modern developers have experience working on the mainframe, and even fewer understand the complex interdependencies that exist between these legacy systems and distributed applications. This is in part due to the isolated environment that specialist mainframe developers have historically worked in; the very same secluded working environments that DevOps encourages companies to move away from. As mainframe developers worked in silos, independent of others, the newer generations of developers were alienated from the platform and had little opportunity to learn from their more experienced colleagues. This has created a shortage of skilled mainframe developers as the older generations continue to reach retirement age.

In an increasingly interconnected IT environment, this skills shortage is hindering DevOps initiatives. If developers aren’t aware of the detrimental impact that a single update to one application can have on the wider ecosystem, they’ll find it nearly impossible to deliver error-free digital services and updates as quickly as they’re required to. So how can businesses keep up with rising consumer expectations and enable programmers with little to no mainframe experience to deliver flawless updates to applications on any platform?

Enter the polyglot programmers

First and foremost, companies need to work towards enabling their developers to work interchangeably on any IT project, regardless of the platform or programming language behind it. As most developers have little experience working on the mainframe, companies need to provide them with modern tools and development interfaces that are more familiar to them and can be used across any IT platform. Importantly, mainframe tools must be integrated with popular and familiar open source/distributed DevOps solutions so that developers can use the same tools for COBOL, Java and other languages.

Having one modern interface and a common set of tools across all platforms will help to unify agile software development lifecycle workflows, enabling programmers to switch seamlessly between tasks regardless of platform and bringing the mainframe into the fold of mainstream IT. With this approach, the mainframe differs only in syntax, and languages like COBOL become just another programming language for developers to learn.

However, enabling developers to update mainframe applications is only half the battle. Companies must also find a way for developers to understand, intuitively, the complex interdependencies that exist between the applications and data sets that are integral to the services they’re delivering, without digging through incomplete documentation or acquiring the specialist knowledge that took their veteran colleagues years to build. Rather than expecting developers to manually trace the complex web of interdependencies between applications and data, modern visualisation technologies can do the legwork for them. The ability to instantly visualise the relationships between digital services, as well as the impact of any changes on the wider ecosystem, enables developers to update mainframe applications with confidence that there won’t be any unforeseen consequences.
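As a rough illustration of what such visualisation tools compute under the hood, the impact of a change can be modelled as a transitive traversal of a dependency graph. The service names and graph below are invented for illustration and are not drawn from any real mainframe estate or vendor product.

```python
from collections import deque

# Hypothetical dependency map: "service -> services that depend on it".
dependents = {
    "cobol_batch": ["account_api"],
    "account_api": ["mobile_banking", "web_portal"],
    "mobile_banking": [],
    "web_portal": [],
}

def impact_of_change(service, dependents):
    """Return every service transitively affected by changing `service`."""
    affected, queue = set(), deque([service])
    while queue:
        for downstream in dependents.get(queue.popleft(), []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# Changing the COBOL batch job ripples out to the API and both front ends.
changed = impact_of_change("cobol_batch", dependents)
```

A visualisation layer would render this same reachable set as a diagram, but the underlying question, "what else does this change touch?", is just graph reachability.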

Mainstreaming the mainframe

Ultimately, education is needed to convince non-mainframe programmers that there is a bright future for these hugely reliable and powerful mainframe systems. To encourage this, businesses need to integrate the mainframe into mainstream cross-platform DevOps processes. The easiest way to achieve this is if the same interfaces and tools can be used across all platforms. This will also encourage collaboration, eliminating mainframe silos and promoting cross-platform DevOps, which should make programming on the mainframe easier and more intuitive to developers who usually work on distributed applications.



[Source:- CBR]

Improving Business Outcomes in a Networked World


Leo McCloskey, Chief Customer Officer for Actual Experience, on the importance of digital experience quality for the success of businesses.

Global acceptance of digital commerce, cloud, managed and networked services, and the resultant need to rapidly conduct business across a constantly changing landscape of providers, applications and services, has businesses scrambling to keep up.

Businesses are transforming into smart users of connected devices, applications and technologies, yet the metrics used to monitor the performance of call centers and retail outlets are not being captured for the online systems and automated services that perform those same functions.

As each new service or application is launched, monitoring and management solutions are added to ensure the quality of the digital service. While that is absolutely necessary, it creates visibility gaps between the experience the user sees and the infrastructure these systems affect. That gap prevents the capture of important digital experience metrics, and the business is adversely affected as a result.

Attempts to align the views from individual monitoring systems to understand the end-to-end performance and quality of digital services only widen the visibility gap and create blind spots that become more prominent and more difficult to minimize as more of the business goes digital. The tools available to track and improve the health of the underlying infrastructure and services are critical, but those tools are meant to manage and improve the digital experience – not measure it.

The measure of end-to-end performance of networked systems for customers, employees, and partners is known as digital experience quality. Understanding the quality of a digital experience can improve employee productivity, help IT solve problems faster, and keep customers from leaving.

You Don’t Know What You Don’t Know
Businesses are awash with data; yet measuring digital experience quality remains difficult. Executives at every level are investigating and investing in strategies to improve access to and analysis of this data, yet the return on that investment remains elusive.

Business leaders overwhelmingly believe that digital experience quality is important to their customers and employees, yet only 22% believe they are actually delivering a consistent experience.

Based on a survey of 403 business leaders conducted for Actual Experience, the strategic focus for many companies looking to understand and improve the quality of the digital services they use and provide includes three things.

57% Data and analytics – More than a trend, businesses of all types are embracing the possibilities created by complex analysis of large quantities of data. When it comes to managing the applications and infrastructure used by customers and employees, the business is only as responsive as the data allows. Having the right metrics and analytics in place to support decision making is a top priority for executives as they determine budgets and identify priorities for the business.

51% Quality – If the systems don’t work, the business doesn’t work. Every hour of every day there are critical connections with suppliers, customers, partners, and employees that depend on the quality of the applications, systems, and networks being used. And many of those applications, systems, and networks reside outside the business. Top priorities are having digital interactions that work consistently well, combined with the ability to make continuous quality improvements.

42% Culture – Understanding the end-to-end experience is only possible when the whole business is joined up and focused on customer and user experience. Creating that culture starts at the top, but executing on that culture requires reliable and capable management systems to deliver an end-to-end view of the digital interactions that support daily operations. While 78% of C-level executives believe they are responsible for digital quality, less than half believe they can identify the specific quality issues that need to be improved.

While core functionality may be solid, the ability to navigate the visibility gaps between management systems and create a seamless end-to-end view of what users are experiencing is missing. Metrics that describe the performance of each component of the user transaction are valuable, but without a reliable measure from the user’s point-of-view, the picture is fragmented and far from clear.

Improving Business Outcomes
When serving customers, executives want the experience to be fast, reliable, and frictionless. Measuring and analyzing all the pieces of the customer or user journey is a daunting task. There are numerous connections to dozens of things that can go wrong all along the way. At any given time, a transaction can be disrupted by a wide variety of factors that might be a one-time occurrence or an indication of a larger problem. How do you know?

Establishing a synthetic digital user and executing a process the way a real user would constructs an outside-in, end-to-end measurement of what happened, when, and whether it matters. Doing this for multiple processes enables rapid response to problems, and over time, measuring digital experience quality reveals performance bottlenecks that show where technology investment would add the most business value.
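This outside-in approach can be sketched in a few lines: execute each step of a user journey and time it, then look for the slowest step. The journey steps below are stand-ins for real HTTP calls or UI actions, and the sketch is not based on Actual Experience’s product.

```python
import time

# Minimal sketch of outside-in measurement: run each step of a user journey
# the way a real user would and record how long each step took. The steps
# are hypothetical placeholders; `time.sleep` simulates their latency.

def measure_journey(steps):
    """Run each (name, action) pair and return per-step timings in seconds."""
    timings = {}
    for name, action in steps:
        start = time.perf_counter()
        action()
        timings[name] = time.perf_counter() - start
    return timings

timings = measure_journey([
    ("login", lambda: time.sleep(0.01)),
    ("search", lambda: time.sleep(0.02)),
])
slowest = max(timings, key=timings.get)
```

Repeating this measurement on a schedule, for each type of user and location, is what turns a one-off timing into the end-to-end digital experience quality metric the article describes.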

Something as minor as spotting a slow responding timekeeping system can improve productivity and reduce the frustration of employees who should have more important things to do. Improving the systems used by those closest to the customer ultimately improves the customer experience as well.

From the first customer inquiry to the final delivery of products and services, multiple systems and providers make up the digital experience. Understanding the view from the outside in, for every type of user, is digital experience quality, and that measurement is critical to your business and your brand.



[Source:- CBR]