The hype around edge computing may be at its zenith, at the peak of Gartner’s “hype cycle”. A cursory Google search turns up statements like the following:

[Image: edge computing buzzwords]

The assertions are that edge will be the next multi-billion-dollar opportunity that will make industries massively more efficient.  It’s an amazing opportunity! 

The problem is we don’t seem to know exactly what it is. These articles – and plenty more – attempt to demystify the edge, clarify its meaning, and even state the characteristics of edge data centers. I witnessed the confusion in person at an event earlier this year when every single speaker presented a different definition of edge. And, I don’t find definitions like the ones below – edgy, edgier, and edgiest – to be exceptionally helpful:

[Image: the many views on hybrid IT architectures]

Getting clarity on edge computing

Part of the problem is that people in the industry talk about edge computing as if it’s new.

It’s not.

We believe the first instances of the edge date back to the 90s, when Akamai introduced its content delivery network. Then came the term cloud computing in 2006, and Cisco introduced fog computing in 2012. If we stipulate that 2012 was the beginning of edge, that implies we’ve been working on it for six or seven years. And during that time, the hype surrounding edge has ballooned and, in my opinion, kicked into high gear in 2018.

Regardless of this hype, what seems certain is that the new hybrid computing architecture will require a more robust edge infrastructure. Users are now asking how fast an app will load on their device, not just whether it will load, and they expect responsiveness.

At Schneider, our popular white paper Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge questioned whether the local edge was becoming the weakest part of your ecosystem. The paper theorizes that the future will consist of three types of data centers: cloud; regional edge; and local edge, which is the first point of connection (for people and things) to the network. Think of it this way: yesterday’s server rooms and wiring closets are becoming tomorrow’s Micro Data Centers.

Since the white paper’s publication, Schneider research shows our resiliency theory is proving to be true. That’s why we, as an industry, are having so many conversations about how to keep the availability of the local edge as high as necessary.

So, we need to ask: what do we pragmatically have to do as an industry to overcome the challenges that the local edge presents?

For the hype around edge computing to become a reality and deliver on the promise that the edge holds, our industry needs to improve in three key areas. 

  1. The Integrated Ecosystem

To deliver this solution, our industry must work together in ways that we haven’t had to in the past. It’s a transformation in how we operate to deliver a solution to a customer.

And, transformation doesn’t happen overnight.

Physical infrastructure vendors must join forces with system integrators, managed service providers, and the customer on innovative systems that are integrated and deployed on-site. All of it needs to come together with a thorough understanding of the customer application and be delivered at multiple locations worldwide, leveraging existing staff. This is part of the challenge.

We believe the solution is a standardized and robust Micro Data Center that can be monitored and managed from any location. We’ve been doing a lot of innovative work with HPE, including integrating our supply chains. We’re also working with Scale Computing, StorMagic, and others. We just announced expanded Cisco Certification of the entire NetShelter product line, now certified to ship with Cisco’s Unified Computing System (UCS) inside. We are making progress as an industry.

  2. The Management Tools

Imagine you’re on the links at St. Andrews, poised to play 18 holes, and in your bag, instead of clubs, you find a rake and a shovel, just like Kevin Costner’s washed-up golf pro Roy McAvoy in the classic 90s film Tin Cup. In the same way that Roy doesn’t have the right equipment, the management tools we have today are inadequate for the challenges at the edge.

One data center operator may have 3,000 sites dotting the globe with multiple alarms per site per day and no on-site staff. It’s easy to understand how they would get overwhelmed very quickly. This can become an almost unmanageable problem.

Management tools must move to a cloud-based architecture. This will give thousands of geographically dispersed edge sites the same level of manageability that we provide for large data centers. With a cloud-based architecture, you can start with what you need and pay as you grow. It’s easy to scale, upgrades are automatic, and cybersecurity stays up to date. Most importantly, this approach enables access from anywhere, at any time, from any device.
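To make the scale problem concrete, here is a minimal sketch of cloud-side alarm triage across a large fleet of edge sites. It is a sketch only: the data-fetching function, field names, and severity labels are hypothetical stand-ins, not the API of any particular management product.

```python
# Minimal sketch: triaging alarms from thousands of edge sites through a cloud service.
# Everything below (fields, severities, the fetch function) is an illustrative assumption.
from collections import Counter
from typing import Iterable


def fetch_site_alarms() -> Iterable[dict]:
    """Stand-in for an authenticated call to a cloud monitoring service.

    In practice this would return one record per active alarm across the whole fleet,
    e.g. {"site": "store-1042", "severity": "critical"}.
    """
    return [
        {"site": "store-1042", "severity": "critical"},
        {"site": "store-1042", "severity": "warning"},
        {"site": "branch-07", "severity": "warning"},
    ]


def sites_needing_attention(alarms: Iterable[dict], limit: int = 10) -> list:
    """Rank sites by critical-alarm count so a small remote team knows where to look first."""
    critical = Counter(a["site"] for a in alarms if a["severity"] == "critical")
    return critical.most_common(limit)


if __name__ == "__main__":
    print(sites_needing_attention(fetch_site_alarms()))  # [('store-1042', 1)]
```

Because the same cloud service holds the data for every site, the same few lines of triage logic cover ten sites or ten thousand.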

In a traditional world, the data center operator with thousands of sites is probably used to fielding calls from non-IT staffers that go like this:

“I’ve got a problem. This thing is beeping!”

“What’s the problem with your UPS?”

“UPS? No, we use FedEx.”

“The UPS is your battery backup. It has an alarm.”

 “I don’t know anything about that but this thing is beeping!” 

And it can go on and on.

In a cloud-based architecture, multiple players in the ecosystem can see the same data and work from the same dashboard at the same time. When everyone can see the same data at the same time, it eliminates this conversation. And, very soon we will be able to manage this process holistically rather than as a collection of individual devices.

  3. Analytics and Artificial Intelligence (AI) to Augment Staff

The promise of analytics and machine learning is fantastic, but we still have a lot to learn. The image of headlines below summarizes a cursory online search on how to deploy AI. Let’s see . . . we have 4 training steps, 5 baking steps, 6 implementation steps, 7 steps to successful AI, 8 easy steps to get started, then there’s 9 baby steps, 10 learning steps, 11 rules, and don’t forget the 12 steps to excellence.

No rational human can make sense out of this.

[Image: the 4 to 12(?) steps to deploy AI]

Of course, I will be the first to admit that we may not be helping, because we decided to introduce our own approach. We reject the idea that you need steps and rules; we believe you need four ingredients. We started work on this two years ago and we’re bringing it to market now, and I think we’re on the right track. The four ingredients are:

  • A secure, scalable, robust cloud architecture. It may sound easy but when you tell software developers to stop making on-premise software and instead make software architected for the cloud, it’s not just a technology change. It’s a change in the way they have to think.
  • A data lake with massive amounts of normalized data. (Of course, how you normalize the data and knowing what data to collect is another challenge.)
  • A talent pool of subject matter experts.
  • Data scientists to develop the algorithms.

It is our experience that once you have these ingredients, which provide a solid foundation, you can start doing something interesting. You can become more predictive and help data center operators spot problems before they occur.
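As a toy illustration of what “predictive” can mean once normalized telemetry sits in a data lake, the sketch below flags a UPS battery whose temperature is drifting away from its own recent baseline. The metric, window size, and threshold are illustrative assumptions; a real deployment would lean on trained models, data scientists, and the subject matter experts listed above.

```python
# Toy illustration of "spot the problem before it occurs": alert when the latest
# reading drifts away from its own rolling baseline. The 3-sigma rule and the 0.1
# noise floor are illustrative assumptions, not a production algorithm.
from statistics import mean, stdev


def drift_alert(readings: list, window: int = 20, sigmas: float = 3.0) -> bool:
    """Return True if the newest sample deviates sharply from the previous `window` samples."""
    if len(readings) <= window:
        return False                             # not enough history yet
    baseline = readings[-window - 1:-1]          # the `window` samples before the newest one
    mu, sd = mean(baseline), stdev(baseline)
    return abs(readings[-1] - mu) > sigmas * max(sd, 0.1)


# Example: a battery that warms slowly, then jumps, trips the alert before any hard alarm.
temps = [25.0 + 0.01 * i for i in range(40)] + [29.5]
print(drift_alert(temps))  # True
```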

The state of edge computing

This is the state of edge computing as I see it, presented with intermittent sarcasm but without hype and hopefully without confusion. For more insights on this topic, I encourage you to check out the Schneider Electric blog directory and our white papers and find out about EcoStruxure IT.

With growing consumer demand for access to trusted, reliable information about the products they buy, mass retailers and “e-tailers” are using innovative technologies, such as digitization, connected objects, and mobile apps, to meet their customers’ needs.

According to an IDC report published in 2017, the production of data will increase tenfold by 2025. Gathering data from multiple sources and sending the relevant information to the right people is becoming a headache for manufacturers. They need to deliver this data within increasingly short deadlines, with near real-time becoming the norm in e-commerce. Added to that, the World Economic Forum states that online sales will grow from around 10% today to 40% by 2027!

In order to share consistent information across the enterprise, product data needs to be dynamic, merged, centralized, and up to date. If this is not done properly, the sharing of information becomes inefficient, unreliable, and prone to errors and misinterpretation.

Why is product content management such a critical initiative for consumer goods companies today?

This can be illustrated by what happened to a major global company in the UK in November 2017.

Newly introduced brands suffered from poor e-commerce launches due to inaccurate new product listings, with missing information such as ingredients and with shortened product names. Consequently, it was difficult for digital shoppers to find these products online, resulting in lost sales and wasted money for the manufacturers.

Research conducted with major British retailers found that 63% of products had errors in their listings and that 71% of these new products fell outside of the top 100 search results on key product search terms, or worse, did not feature in the search results at all.

It is important to consider the shift of the last few years: shoppers are now taking the time to read ingredient labels. They want to know exactly what is in the products they buy, how those products were processed, and even where they were sourced, so they can make better purchasing decisions. This is extremely important, as it is a way to drive greater shopper loyalty. Research shows that 96% of consumers will simply abandon their purchase if they can’t find the information they need to evaluate a product.

In the last two years, many startups have developed apps that enable consumers to scan products and check or contribute ratings. Some of these apps are collaborative, such as Open Food Facts, which relies on consumers to enrich the database. If shoppers do not regularly update the information on products, or if a company does not proactively connect to such a platform to send updated product information, there is a high probability that the product information will be outdated and misleading. It may fail to show new products in a good light, miss value-added information like recipes or, worse, omit allergen notifications.

Therefore, to regain consumers’ trust, CPG companies urgently need to take back ownership of the product data they disclose to their end consumers!

What are the challenges faced by a CPG company regarding its product information management?

Today, more consumers are digital shoppers and retailers use an omnichannel strategy to better serve them. However, it is becoming more and more complex for a CPG company to distribute product content in a consistent manner to all the channels at the same time.

Retailers are requesting more rapid access to information; new product content is often required within a maximum of 48 hours. And they require more content as well: digital assets, such as multiple product views and videos, are now part of the basic information requested to enrich what is delivered to the digital shopper. CPG companies need to address this challenge if they want to remain visible and competitive in their market.

Moreover, the number of attributes for a single product has grown from 50 to more than 500 in the last five years, leading to hundreds of relationships and millions of records to manage.

Internally, companies also need to manage multiple sources of product data used by different functions and stakeholders (production, quality, marketing, sales, …) that must be synchronized from time to time, tying up numerous teams in these operations. In parallel, companies need to send this product information to many external consumers (retailers, marketplaces, e-commerce platforms, startups, …), making it critical to manage these operations in a fast and consistent manner.

In conclusion, handling critical information in spreadsheets and across multiple systems within a CPG company is problematic, and even ERP systems are not optimized to support product information lifecycles. Because consumers rely on product data as a source of truth, companies need to ensure the accuracy of that information to succeed in today’s omnichannel reality.

The positive news is that there are new, disruptive solutions for better managing product information, such as PIM (Product Information Management) and DAM (Digital Asset Management) systems, enabling CPG companies to better meet the challenges of today’s environment.
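As a rough illustration of the idea behind PIM (a single, validated source of truth that feeds every channel), here is a minimal sketch of a centralized product record with basic validation and per-channel export. The field names, validation rules, and channel shapes are hypothetical, not the schema of any particular PIM product.

```python
# Minimal sketch of the PIM idea: one centralized, validated product record that feeds
# every channel, instead of per-retailer spreadsheets. All fields and rules below are
# illustrative assumptions, not a real PIM schema.
from dataclasses import dataclass, field


@dataclass
class ProductRecord:
    gtin: str                                   # global trade item number
    name: str
    ingredients: list
    allergens: list
    image_urls: list = field(default_factory=list)

    def validate(self) -> list:
        """Catch the gaps (missing ingredients, truncated names) that break e-commerce listings."""
        issues = []
        if not self.ingredients:
            issues.append("ingredients missing")
        if not self.allergens:
            issues.append("allergen declaration missing")
        if len(self.name) < 10:
            issues.append("name looks truncated")
        return issues

    def to_channel(self, channel: str) -> dict:
        """Export the same source-of-truth record in a per-channel shape."""
        payload = {"gtin": self.gtin, "name": self.name, "allergens": self.allergens}
        if channel == "ecommerce":
            payload["ingredients"] = self.ingredients
            payload["images"] = self.image_urls
        return payload


record = ProductRecord("07612345678900", "Choco Crunch Cereal 375g",
                       ingredients=["oats", "cocoa", "sugar"], allergens=["gluten"])
print(record.validate())              # [] -> clean record, safe to publish everywhere
print(record.to_channel("ecommerce"))
```

The point is not the code itself but the workflow it implies: the record is corrected once, centrally, and every retailer, marketplace, and consumer app receives the same up-to-date content.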

[Image: critical technologies and capabilities for grocery manufacturing]

(Source: Accenture and GMA – Seize the opportunity to drive digital transformation in CPG, 2017)

To learn more about solutions that answer those challenges, check out Part II of this post or contact us.

Cybersecurity Blog Series: Part 2

How do regulation and legislation impact global cybersecurity practices? What about incentives? In part one of this series, I discussed the importance of standards in preventing cyberattacks, particularly in our increasingly industrialized digital and IoT world. Here, I’ll explore regulations and the role of incentives in cybersecurity.

While regulations and legislation vary by country, cyberattacks are border agnostic. Attacks—both attempted and successful—targeting a facility in any one country can have detrimental consequences worldwide. Therefore, it makes sense to put in place international guardrails and agreements on cybersecurity best practices.

These include initiatives from the International Society of Automation (ISA), such as IEC 62443, a set of standards developed by the ISA99 and International Electrotechnical Commission (IEC) committees to improve the safety, availability, integrity, and confidentiality of components and systems used in industrial automation and control. Adopted by many countries, these standards are used across industrial control segments.

However, standards bodies are just one piece of a broader matrix of players that includes governments (and the legislation they drive), industrial plant operators/owners, vendors/suppliers, and academia. All of these must collaboratively tackle the monumental and ever-evolving task of cyberattack prevention.

Legislation 

With many of today’s attacks perpetrated by malicious actors, such as nation-states, who have unlimited time, resources, and funding, cyber-defense strategies and protocols set by the government can make an impact. Take, for example, the U.S. National Institute of Standards and Technology (NIST) framework, which is the authoritative source for cybersecurity best practices and was recently expanded to address evolving identity management and supply chain topics.

Even more recently, on Nov. 28, 2018, the U.S. House of Representatives passed the SMART IoT Act in a unanimous vote, sending the bill to the U.S. Senate. The legislation, introduced by Rep. Robert Latta (R-Ohio), tasks the Department of Commerce with studying the current internet-of-things industry in the United States. The research would look into who develops IoT technologies, which federal agencies have jurisdiction over the industry, and what regulations have already been developed. The outcome of this act is a potential opportunity for fundamental cybersecurity practices to be written into law, protecting individuals and industry.

As a region, North America also has the well-established North American Electric Reliability Corporation (NERC) standards, which ensure by law that power system owners, operators, and users comply with a specific set of requirements meant to protect the power grid from both physical attacks and cyberattacks. The Federal Energy Regulatory Commission, an agency that issues fines for noncompliance, backs NERC in the United States.

This is also true around the world. Germany’s IT Security Act is responsible for protecting critical German infrastructure. In the United Kingdom, the responsibilities for safeguarding and maintaining Critical National Infrastructure (CNI) continue under the Civil Contingencies Act of 2004. The U.K. government’s work around CNI mostly takes the form of non-mandatory guidance and good practices; its approach can best be understood by reading the U.K. National Cybersecurity Strategy document. More specific to the Industrial Automation and Control Systems (IACS) industry in the United Kingdom is the excellent operational guidance document Cyber Security for Industrial Automation and Control Systems.

The European Commission published, in the autumn of 2017, a proposal for a regulation that clarifies the role of ENISA (the European Union Agency for Network and Information Security) and introduces the idea of an Information and Communication Technology cybersecurity certification, or “Cybersecurity Act.” Like IEC 62443, this is a good idea that seeks to standardize evaluation and certification schemes.

Other countries have similar standards or are in the process of creating them. In countries like France, standards also carry the weight of law.

Promoting Incentive-based Regulation

While there are different schools of thought on what works best in the regulation vs. incentive debate, it doesn’t have to be an either-or scenario. Do end-users need oversight? Yes. Do we all agree keeping equipment, software and operating protocol regularly updated is a critical step in prevention? Absolutely. Is there only one way to achieve this goal? No. Consider incentive-based regulation:

A government regulation that is designed to induce changes in the behavior of individuals or firms, in order to produce environmental, social, or economic benefits that would otherwise be prescribed by legislation[1].

An approach that incentivizes end users to adopt the latest equipment, software, training, and operating protocols, funded by national cybersecurity programs tied to national priorities, would give policymakers the support they need to enforce current regulation and encourage new regulation aligned with common priorities. This alignment is important because it fosters regulatory compliance while driving company priorities.

How can regulatory incentives be introduced to promote investment? Any major investor-owned utility looks for investments that create the most value for its shareholders. In cases where cybersecurity-driven modernization investments are not the best choice in terms of shareholder value, we should consider federal incentive mechanisms that encourage such investments when they are in the public interest. These mechanisms come in many forms. Most simply, they could take the form of tax incentives or abatements, where investments in improving cybersecurity in certain regulated sectors come with tax write-offs.

Other, more complex incentives that could be considered include rebates for specific investments through federally funded programs, price caps (a form of performance-based rate-making), and performance-based incentives tied to specific cybersecurity-related goals.

Regardless of how the incentives are funded and structured, the scenario can be mutually beneficial for shareholders, owner/operators, providers and the public alike. The government encourages investing in and acquiring modern industrial automation and control systems and solutions that help prevent potentially catastrophic events from occurring; the plant or utility receives funding to reinvest in the latest technology, staff training and liability management initiatives.

In my next post, I’ll delve into the very interesting point of cybersecurity risk management accountability.

As with most aspects of cybersecurity prevention, balance is often the best practice. As a collaborative industry, we should explore a balance of regulation, standards and incentives.

For more insight from Schneider Electric on cybersecurity, download our whitepaper: “Cybersecurity Best Practices.”

[1] Oxford Reference – http://www.oxfordreference.com/view/10.1093/oi/authority.20110803100000562

The very first computers used vacuum tube technology and gave off a tremendous amount of heat. In the 1960s, mainframe computers switched to transistor technology, and many of them still required liquid circulating inside the machine to keep cool. The supercomputers of the 70s and 80s (Crays, for example) used complex liquid-cooled designs. However, microprocessor technology advancements have since led to very high performance at more reasonable heat output.

Data Center Design

Servers today can be cooled by moving air and rejecting the heat through cooling towers and chillers; they do not need liquid in the server or on the processors. The power profile of mainstream 2U servers has stayed at around 150W. This thermal design power (TDP), sometimes called thermal design point, has held for roughly the last 10 years, because it is reasonable to get that much heat out of a 2U chassis and possible to cool such servers with traditional data center designs built on precision cooling technologies.

The launch of AI demands improved data center cooling design to meet its processing power

Now we have an application that is placing massive demands on processing in data centers: artificial intelligence (AI). AI has begun to hit its stride, springing from research labs into real business and consumer applications. Because AI applications are so compute-heavy, many IT hardware architects are using GPUs as core or supplemental processing. The heat profile of many GPU-based servers is double that of more traditional servers, with a TDP of 300W vs. 150W. That is why we are seeing a renaissance of sorts in liquid cooling. Liquid cooling has been kicking around in niche high-performance computing (HPC) deployments for a while, but the new core AI applications are increasing demand in a much bigger way.
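To see why a doubled TDP matters for air cooling, here is a back-of-the-envelope sketch. The 20-server rack and the 20°F air temperature rise are illustrative assumptions; the airflow rule of thumb (CFM ≈ 3.16 × watts / ΔT in °F) follows from the heat capacity of air.

```python
# Back-of-the-envelope sketch of why doubling TDP strains air cooling.
# The 20-server rack and 20 degF air temperature rise are illustrative assumptions;
# CFM ~= 3.16 * watts / delta_T(degF) is a common rule of thumb based on air's heat capacity.
def rack_airflow_cfm(servers: int, watts_per_server: float, delta_t_f: float = 20.0) -> float:
    heat_w = servers * watts_per_server
    return 3.16 * heat_w / delta_t_f


for tdp in (150, 300):
    kw = 20 * tdp / 1000
    cfm = rack_airflow_cfm(servers=20, watts_per_server=tdp)
    print(f"{tdp} W servers: {kw:.1f} kW per rack, roughly {cfm:.0f} CFM of airflow")
# 150 W -> 3.0 kW and ~474 CFM per rack; 300 W -> 6.0 kW and ~948 CFM per rack
```

Doubling the per-server heat doubles both the rack load and the airflow the room has to move, which is exactly the pressure that pushes architects toward liquid.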

The benefits of liquid cooling technology include the ability to deploy at much higher density, a lower overall TCO driven by increased efficiency, and a greatly reduced footprint. While the benefits are impressive, the barriers are somewhat daunting. Liquid cooling adds complexity through new server chassis designs, manifolds, piping, and liquid flow rate management. There is fear of leakage from these complex systems of pipes and fittings, and there is a real concern about maintaining the serviceability of servers, especially at scale.

More Data Center Design Advancements to Come, Just Like Liquid Cooling

Traditionalists will argue that it may not make sense to switch to liquid cooling and that we can keep stretching core air cooling technologies. Remember, people once argued that we didn’t need motorized cars, just faster horses! Or, more recently, people argued that Porsche should never water-cool the engine in a 911 and were outraged when it happened in 1998. That outrage has since subsided, and there is now talk of a hybrid electric 911. As with cars, technology in the data center continues to advance and will keep advancing. Check out other trends and technology predictions I have for the data center industry in my recent blog post with Data Centre News.