Inside Microsoft Copilot: A Look At The Technology Stack


As expected, generative AI took centre stage at Microsoft Build, the annual developer conference hosted in Seattle. A few minutes into his keynote, Satya Nadella, CEO of Microsoft, unveiled the new framework and platform for developers to build and embed an AI assistant in their applications.

Branded as Copilot, it is the same framework Microsoft is leveraging to add AI assistants to a dozen of its own applications, including GitHub, Edge, Microsoft 365, Power Apps, Dynamics 365, and even Windows 11.

Microsoft is known for adding layers of APIs, SDKs, and tools that enable developers and independent software vendors to extend the capabilities of its core products. The ISV ecosystem that exists around Office is a classic example of this approach.

As a former Microsoft employee, I have observed the company’s unwavering ability to seize every opportunity to transform internal innovations into robust developer platforms. Interestingly, the culture of “platformisation” of emerging technology at Microsoft is still prevalent three decades after it launched highly successful platforms such as Windows, MFC, and COM.

While introducing the Copilot stack, Kevin Scott, Microsoft’s CTO, quoted Bill Gates – “A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it’s a platform.”

Bill Gates’ statement is exceptionally relevant and profoundly transformative for the technology industry. There are many examples of platforms that grew exponentially beyond the expectations of the creators. Windows in the 90s and iPhone in the 2000s are classic examples of such platforms.

The latest platform to emerge out of Redmond is the Copilot stack, which allows developers to infuse intelligent chatbots with minimal effort into any application they build.

The rise of AI chatbots such as ChatGPT and Bard is changing the way end users interact with software. Rather than clicking through multiple screens or executing numerous commands, they prefer interacting with an intelligent agent that is capable of efficiently completing the tasks at hand.

Microsoft was quick to realize the importance of embedding an AI chatbot into every application. After arriving at a common framework for building Copilots across many of its products, it is now extending that framework to its developer and ISV community.

In many ways, the Copilot stack is like a modern operating system. It runs on top of powerful hardware based on the combination of CPUs and GPUs. The foundation models form the kernel of the stack, while the orchestration layer is like the process and memory management. The user experience layer is similar to the shell of an operating system exposing the capabilities through an interface.

Let’s take a closer look at how Microsoft structured the Copilot stack without getting too technical:

The Infrastructure – The AI supercomputer running in Azure, the public cloud, is the foundation of the platform. This purpose-built infrastructure, which is powered by tens of thousands of state-of-the-art GPUs from NVIDIA, provides the horsepower needed to run complex deep learning models that can respond to prompts in seconds. The same infrastructure powers the most successful app of our time, ChatGPT.

Foundation Models – The foundation models are the kernel of the Copilot stack. They are trained on a large corpus of data and can perform diverse tasks. Examples of foundation models include GPT-4, DALL-E, and Whisper from OpenAI. Some open-source LLMs such as BERT, Dolly, and LLaMA may also be a part of this layer. Microsoft is partnering with Hugging Face to bring a catalogue of curated open-source models to Azure.

While foundation models are powerful by themselves, they can be adapted for specific scenarios. For example, an LLM trained on a large corpus of generic textual content can be fine-tuned to understand the terminology used in an industry vertical such as healthcare, legal, or finance.

Microsoft’s Azure AI Studio hosts various foundation models, fine-tuned models, and even custom models trained by enterprises outside of Azure.

The foundation models rely heavily on the underlying GPU infrastructure to perform inference.

Orchestration – This layer acts as a conduit between the underlying foundation models and the user. Since generative AI is all about prompts, the orchestration layer analyzes the prompt entered by the user to understand the user’s or application’s real intent. It first applies a moderation filter to ensure that the prompt meets the safety guidelines and doesn’t force the model to respond with irrelevant or unsafe responses. The same layer is also responsible for filtering out model responses that do not align with the expected outcome.
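To make that two-way filtering concrete, here is a minimal sketch of a moderation gate in Python. The blocklist check and function names are illustrative assumptions, not Microsoft’s actual safety service, which would use far more sophisticated classifiers.

```python
# Minimal sketch of a two-way moderation gate in an orchestration layer.
# `is_safe` is a crude stand-in for a real content-safety service; the names
# and logic here are illustrative assumptions, not Microsoft's API.

BLOCKLIST = {"ignore previous instructions", "reveal your system prompt"}

def is_safe(text: str) -> bool:
    """Very crude safety check: flag text containing blocklisted phrases."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def call_llm(prompt: str) -> str:
    """Placeholder for the actual foundation-model call."""
    return f"(model response to: {prompt!r})"

def handle_prompt(user_prompt: str) -> str:
    # Filter the incoming prompt before it reaches the model.
    if not is_safe(user_prompt):
        return "Sorry, I can't help with that request."
    response = call_llm(user_prompt)
    # Filter the model's response before it reaches the user.
    if not is_safe(response):
        return "The generated response was withheld by the safety filter."
    return response

print(handle_prompt("Summarise my unread emails"))
```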

The next step in orchestration is to complement the prompt with meta-prompting, adding context that’s specific to the application. For example, the user may not have explicitly asked for the response to be packaged in a specific format, but the application’s user experience needs that format to render the output correctly. Think of this as injecting application-specific instructions into the prompt to make it contextual to the application.
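As a rough illustration of meta-prompting, the sketch below silently wraps a user’s prompt with a hypothetical application-specific preamble that fixes the output format. The template and field names are invented for the example; a real Copilot meta-prompt would be considerably richer.

```python
# Illustrative sketch of meta-prompting: the application wraps the user's
# prompt with context it needs (here, a JSON output format) before the
# combined prompt is sent to the model. The template is a made-up example,
# not the actual Copilot meta-prompt.

META_PROMPT = (
    "You are the assistant embedded in an expense-tracking app.\n"
    "Always answer as JSON with the keys 'summary' and 'items' so the UI can "
    "render the result.\n"
)

def build_prompt(user_prompt: str, app_context: str) -> str:
    return f"{META_PROMPT}\nApplication context: {app_context}\n\nUser: {user_prompt}"

print(build_prompt("What did I spend on travel last month?",
                   "currency=GBP, period=May-2023"))
```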

Once the prompt is constructed, additional factual data may be needed by the LLM to respond with an accurate answer. Without this, LLMs may tend to hallucinate by responding with inaccurate and imprecise information. The factual data typically lives outside the realm of LLMs in external sources such as the world wide web, external databases, or an object storage bucket.

Two techniques are popularly used to bring external context into the prompt and help the LLM respond accurately. The first is to use a combination of a word embeddings model and a vector database to retrieve relevant information and selectively inject it into the prompt as context. The second approach is to build a plugin that bridges the gap between the orchestration layer and the external source. ChatGPT uses the plugin model to retrieve data from external sources to augment the context.
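The first technique can be sketched in a few lines of Python. The bag-of-words “embedding” and in-memory document list below are crude stand-ins for a real embeddings model and vector database, but the flow – embed the question, retrieve the closest document, inject it into the prompt – is the same.

```python
# Sketch of the first RAG technique above: embed the documents and the user's
# question, retrieve the closest document, and inject it into the prompt as
# grounding context. The bag-of-words "embedding" is a crude stand-in for a
# real embeddings model and vector database; the flow is the point.
import math
import re
from collections import Counter

DOCS = [
    "Contoso's refund policy allows returns within 30 days of purchase.",
    "Contoso support is available Monday to Friday, 9am to 5pm GMT.",
    "Contoso ships to the UK, EU and US; delivery takes 3-5 working days.",
]

def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str) -> str:
    q = embed(question)
    return max(DOCS, key=lambda doc: cosine(q, embed(doc)))

question = "What is the refund policy for returns?"
context = retrieve(question)            # picks the refund-policy document
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```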

Microsoft calls the above approaches Retrieval Augmented Generation (RAG). RAG is expected to bring stability and grounding to the LLM’s responses by constructing a prompt with factual and contextual information.

Microsoft has adopted the same plugin architecture that ChatGPT uses to build rich context into the prompt.

Projects such as LangChain, Microsoft’s Semantic Kernel, and Guidance are becoming key components of the orchestration layer.

In summary, the orchestration layer adds the necessary guardrails to the final prompt that’s being sent to the LLMs.

The User Experience – The UX layer of the Copilot stack redefines the human-machine interface through a simplified conversational experience. Many complex user interface elements and nested menus will be replaced by a simple, unassuming widget sitting in the corner of the window. This becomes the most powerful frontend layer for accomplishing complex tasks irrespective of what the application does. From consumer websites to enterprise applications, the UX layer will be transformed forever.

Back in the mid-2000s, when Google started to become the default homepage of browsers, the search bar became ubiquitous. Users started to look for a search bar and use that as an entry point to the application. It forced Microsoft to introduce a search bar within the Start Menu and the Taskbar.

With the growing popularity of tools like ChatGPT and Bard, users are now looking for a chat window to start interacting with an application. This is bringing about a fundamental shift in the user experience. Instead of clicking through a series of UI elements or typing commands in the terminal window, users want to interact through a ubiquitous chat window. It doesn’t come as a surprise that Microsoft is going to put a Copilot with a chat interface in Windows.

The Microsoft Copilot stack and its plugins present a significant opportunity for developers and ISVs. They will result in a new ecosystem firmly grounded in foundation models and large language models.

If LLMs and ChatGPT created the iPhone moment for AI, it is the plugins that become the new apps.

Source – https://www.forbes.com/sites/janakirammsv/2023/05/26/inside-microsoft-copilot-a-look-at-the-technology-stack/?ss=cloud&sh=7a92e15a5b59

The Sobering Truth About Ransomware—For The 80% Who Paid Up

Newly published research covering 1,200 organizations impacted by ransomware reveals a sobering truth that awaits many of those who decide to pay the ransom. According to research from data resilience specialists Veeam, some 80% of the organizations surveyed decided to pay the demanded ransom in order both to end the ongoing cyber-attack and to recover otherwise lost data, despite 41% of those organizations having a “do not pay” policy in place. This only goes to reinforce the cold, hard fact that cybercrime isn’t an easy landscape to navigate, something that’s especially true when your business is facing the real-world impact of a ransomware attack.

The Sobering Truth For 21% Of Ransom Payers

Of the 960 organizations covered in the Veeam 2023 Ransomware Trends Report that paid a ransom, 201 (21%) were still unable to recover their lost data. Perhaps it’s a coincidence, who knows, but the same number also reported that ransomware attacks were now excluded from their insurance policies. Of those organizations with cyber-insurance cover, 74% reported a rise in premiums.

Although I feel bad for those who paid up to no avail, I can’t say I’m surprised. Two years ago, I was reporting the same truth, albeit with larger numbers, when it came to trusting cybercriminals to deliver on their promises. Back then, another ransomware report, this time from security vendor Sophos, revealed that 32% of those surveyed opted to pay the ransom, but a shocking 92% failed to recover all their data and 29% were unable to recover more than half of the encrypted data.

The Decision To Pay A Ransom Is Never A Binary One

Of course, as already mentioned, the decision to pay is not and never can be a totally binary one. But, and I cannot emphasise this enough, it is always wrong.

You only have to ask the question of who benefits most from a ransom being paid to understand this. The answer is the cybercriminals, those ransomware actors who are behind the attacks in the first place. Sure, an organization may well argue that it benefits most as it gets the business back up and running in the shortest possible time. I get that, of course I do, but maybe investing those million bucks (sometimes substantially less, or more) in better data security would have been the better option to begin with?

But, they may well argue again, that’s what the cyber-insurance is for: paying out the big bucks if the sticky stuff hits the fan. Sure, but the answer to my original question remains the same: it’s the ransomware actors that are still winning here. They get the payout, which empowers them to continue and hunt down even more organizations.

Ransomware Has Evolved, But Security Basics Remain The Same

Then there’s the not-so-small matter of how most ransomware actors no longer just encrypt your data, and often your data backups, if they encrypt anything at all. Some groups have switched to stealing sensitive customer or corporate data instead, with the ransom demanded in return for them not selling it to the highest bidder or publishing it online. Many groups combine the two for a double-whammy ransomware attack. I have even reported on one company that got hit by three successful ransomware attacks, by three different ransomware actors, within the space of just two weeks.

Which brings me back to my point: ensuring your data is properly secured is paramount. Why bother paying a ransom if you don’t fix the holes that let the cybercriminals in to start with?

“Although security and prevention remain important, it’s critical that every organization focuses on how rapidly they can recover by making their organization more resilient,” Danny Allan, chief technology officer at Veeam, said. “We need to focus on effective ransomware preparedness by focusing on the basics, including strong security measures and testing both original data and backups, ensuring survivability of the backup solutions, and ensuring alignment across the backup and cyber teams for a unified stance.”

Source – https://www.forbes.com/sites/daveywinder/2023/05/30/the-sobering-truth-about-ransomware-for-the-80-percent-who-paid-up/?ss=cybersecurity&sh=191a618439f6

The Future Of Computing: Supercloud And Sky Computing

Cloud computing, multi-cloud, and hybrid-cloud are all terms we’ve become used to hearing. Now we can add “supercloud” and “sky computing” to the list of terminology that describes the computing infrastructure of the coming decade.

Although it’s hard to believe, given how ubiquitous it is today, cloud computing as a practical reality has only been around for the past decade or so. But in that short time, it has revolutionized the concept of IT networking and infrastructure.

In the simplest terms, it involves providing computer storage, processing power, and applications via the internet, so users don’t need to worry about buying, installing, and maintaining hardware and software themselves.

In that time, we’ve seen the emergence of multi-cloud – which involves businesses and organizations picking and choosing services across the multitude of cloud providers – and hybrid cloud, where infrastructure is delivered via both cloud and on-premises solutions.

But technological progress never stands still, and more recently, new terms, including supercloud and sky computing, have emerged to describe what the next stage in the evolution of “infrastructure-as-a-service” might look like.

But what do they mean, and what advantages do they offer businesses and organizations? Let’s take a look at them in a little more depth and examine some of the potential use cases.

What Are Supercloud and Sky Computing?

Both of these terms, in fact, describe very similar ideas – the next stage in the evolution of cloud computing, which will be distributed across multiple providers. It will also integrate other models, including edge computing, into a unified infrastructure and user experience. Other names that are sometimes used include “distributed cloud” and “metacloud”.

This is seen as necessary because, while many organizations have made the leap to multi-cloud, the different cloud providers do not always integrate with each other. In other words, a business pursuing a multi-cloud strategy may find itself managing multiple cloud environments, with each one operating, to some extent, as an independent entity. This can make it difficult if, for example, it wants to shift applications or data from one cloud to another.

The answer proposed by the supercloud concept is to create another abstraction layer above this that operates agnostically of whatever cloud platform or platforms are running below it. This is the supercloud, where applications can be run in containers or virtual machines, interfacing with any cloud platforms underneath.

The result is separate cloud environments that operate as if they are interconnected with each other, allowing software, applications, and data to move freely between them.

This means that a business might have service agreements in place with, for example, Amazon Web Services, Google Cloud, and Microsoft Azure. Infrastructure could then be reconfigured on-the-fly through the supercloud interface to move services between these different platforms, or between servers in different geographic locations, as requirements change.

Examples of when this might be useful are when services need to be delivered to a new group of users in a new region or when a particular data center becomes overloaded. The entire application can simply be “lifted and shifted” to a new, more convenient data center or a different cloud provider.
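A toy Python sketch of the supercloud idea might look like the following, where a single interface hides which provider a workload actually lands on. The provider classes and method names are invented for illustration; they are not any vendor’s real API.

```python
# Illustrative sketch of the "abstraction layer above the clouds" idea:
# the application talks to one Supercloud interface, and a workload can be
# placed on (or moved between) providers without the application changing.
# The provider classes and method names are invented, not any vendor's API.
from typing import Protocol

class CloudProvider(Protocol):
    name: str
    def deploy(self, workload: str, region: str) -> str: ...

class AWS:
    name = "aws"
    def deploy(self, workload: str, region: str) -> str:
        return f"{workload} running on AWS in {region}"

class Azure:
    name = "azure"
    def deploy(self, workload: str, region: str) -> str:
        return f"{workload} running on Azure in {region}"

class Supercloud:
    def __init__(self, providers: list[CloudProvider]):
        self.providers = {p.name: p for p in providers}

    def place(self, workload: str, provider: str, region: str) -> str:
        # In a real system this decision would be driven by cost, latency
        # or data-residency policy rather than an explicit argument.
        return self.providers[provider].deploy(workload, region)

sky = Supercloud([AWS(), Azure()])
print(sky.place("billing-api", "aws", "eu-west-1"))
print(sky.place("billing-api", "azure", "uksouth"))   # "lift and shift"
```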

In many deployments, supercloud combines the benefits of both hybrid and multi-cloud, as it also gives access to on-premises infrastructure and other models such as edge computing. The important part is that all of it is accessible and usable through a unified user interface, so the actual location where the data is stored and where the applications are running from is invisible to the user, who always has a consistent experience.

As well as simplifying internal infrastructure, systems, and processes, migrating to supercloud models, in theory, makes it easier for organizations to integrate and share tools or data with their clients and partners, who may be using completely different platforms to them.

What Are The Key Challenges With Supercloud and Sky Computing?

Right now, a major challenge when it comes to setting up supercloud infrastructure is security. This is because different cloud providers might have different security protocols, and any data and applications that have to operate across multiple providers will need to be configured in a way that’s compatible with all of them.

Using more cloud services simply means that there are more surfaces where data can be exposed to possible security breaches. A priority for those laying the foundations for supercloud systems will be creating automated solutions that run in the supercloud layer in order to offer protection regardless of what cloud service or on-premises infrastructure is being used.

Fundamentally, cloud computing is designed to be a final stepping-stone on the road to the commoditization of computing infrastructure. This objective is set out in a paper published in 2021 by the University of California, Berkeley professors Ion Stoica and Scott Shenker, titled From Cloud Computing to Sky Computing.

Stoica and Shenker were early proponents of the cloud computing paradigm, writing about it as early as 2009. Back then, they predicted that it could lead to compute and storage infrastructure becoming “utilities,” similar to electricity and internet connectivity. This didn’t happen – largely due to the emergence of different standards between different cloud service providers (Amazon, Google, Microsoft, and so on). Supercloud (or sky computing, as Stoica and Shenker prefer to term it) may be the way to finally make it happen.

They do, however, posit that while the technical challenges will be fairly simple to overcome – creating services and standards to communicate between different clouds, for example – the concept might encounter some resistance from the cloud providers themselves.

Will Amazon or Google welcome the idea of “sharing” their cloud customers with competing services? Stoica and Shenker point to the existence of applications such as Google Anthos – an application management platform that runs on Google Cloud as well as AWS and other cloud platforms – as evidence that they might be becoming receptive to the idea.

Altogether, supercloud is an exciting concept that has the potential to make it simpler and more affordable for organizations to leverage powerful computing infrastructure. This has to be good news all around, hopefully making it easier for innovators to bring us cloud-based tools and apps that further enrich our lives.

Source: The Future Of Computing: Supercloud And Sky Computing

Cloud computing hub to launch with £2m EPSRC funding

A new £2 million hub, co-led by the University of York, has been launched to investigate the future potential of cloud computing.

The Hub, part of a £6m investment by the Engineering and Physical Sciences Research Council (EPSRC), will bring researchers together to drive innovations in cloud computing systems, linking experts with the wider academic, business and international communities. 

Future communication

The team behind the initiative – called Communications Hub for Empowering Distributed Cloud Computing Applications and Research (CHEDDAR) – believes it is imperative that new communications systems are built to be safe, secure, trustworthy, and sustainable, from the tiniest device to large cloud farms. 

Co-lead of the new hub, Dr Poonam Yadav, from the University’s Department of Computer Science, said: “The three communication hubs from EPSRC is a much-needed and timely initiative to bring cohesive and interoperable current and future communication technologies to enable emerging AI, neuromorphic and quantum computing applications.

“CHEDDAR is strongly built on the EDI principle, providing early career researchers opportunities to engage with far-reaching ideas along with national and international academic and industry experts.”

Industry 

Jane Nicholson, EPSRC’s Director for Research Base, said: “Digital communications infrastructure underpins the UK’s economy of today and tomorrow and these projects will help support the jobs and industry of the future.

“Everybody relies on secure and swift networking and EPSRC is committed to backing the research which will advance these technologies.”

Goals

Led by Imperial College London, and in collaboration with partners from the universities of Cranfield, Leeds, Durham and Glasgow, the goals of CHEDDAR are to:

Develop innovative collaboration methods to engage pockets of excellence around the UK and build a cohesive research ecosystem that nurtures early career researchers and new ideas.  

Inform the design of new communication surfaces that cater to emerging computing capabilities (such as neuromorphic, quantum, molecular), key infrastructures (such as energy grids and transport), and emerging end-user applications (such as autonomy) to answer problems that we cannot solve today. 

Create integrated design of hierarchical connected human-machine systems that promote secure learning and knowledge distribution, resilience, sustainable operations, trust between human and machine reasoning, and accessibility in terms of diversity and inclusion. 

Source: Cloud computing hub to launch with £2m EPSRC funding

Lack of cybersecurity training is leaving businesses at risk

Employees are constantly bombarded with phishing

Businesses are putting themselves at risk of all kinds of cyber-attacks due to poor practices when it comes to educating and training the workforce.

A new report from Yubico found that less than half (42%) of the UK businesses it surveyed held mandatory, frequent cybersecurity training.

There are many things employees could be taught which would improve the cybersecurity posture of organizations, the report further suggested. For example, roughly half (47%) of employees often write down or share their passwords, which is one of the most common mistakes when it comes to safeguarding a password.

Resetting the password 

Elsewhere, the report found that many workers (33%) allow other people to use their work-issued device, while more than half (58%) use personal devices for work. 

A similar percentage (49%) do the reverse, using a work-issued device for personal tasks, which is another cybersecurity red flag. Finally, half (48%) have been exposed to a cyberattack such as phishing without reporting the incident to their IT and cybersecurity teams.

Even when an employee gets exposed to a cyberattack, their organization does very little to amend the issue. “Very few” companies implemented phishing-resistant cybersecurity methods in response to being targeted; 28% simply had their passwords reset, and a similar share (28%) were made to attend cybersecurity training.

“Cyber attacks, and how to prevent them, should be top of mind for every organization. However, our research reveals a remarkable disparity between the risks of cyber-attacks and businesses’ attitudes toward them,” commented Niall McConachie, regional director (UK & Ireland) at Yubico.

For McConachie, businesses should deploy multi-factor authentication (MFA) as soon as possible, and consider FIDO2 security keys. The latter “have been proven to be the most effective phishing-resistant option for business-wide cybersecurity”, he says. 

“By removing the reliance on passwords, MFA and strong 2FA are more user-friendly and can be used for both personal and professional data security. This is especially important as cyber-attacks are not limited to companies but can directly target customers and employees too.”

One of the most-used passwords – “123456” – is still in use today, despite being known by virtually every cybercriminal out there, the report concluded.

Source: Lack of cybersecurity training is leaving businesses at risk

AI chatbots making it harder to spot phishing emails, say experts

Poor spelling and grammar that can help identify fraudulent attacks being rectified by artificial intelligence

Chatbots are taking away a key line of defence against fraudulent phishing emails by removing glaring grammatical and spelling errors, according to experts.

The warning comes as policing organisation Europol issues an international advisory about the potential criminal use of ChatGPT and other “large language models”.

Phishing emails are a well-known weapon of cybercriminals that fool recipients into clicking on a link that downloads malicious software or trick them into handing over personal details such as passwords or PINs.

Half of all adults in England and Wales reported receiving a phishing email last year, according to the Office for National Statistics, while UK businesses have identified phishing attempts as the most common form of cyber-threat.

However, a basic flaw in some phishing attempts – poor spelling and grammar – is being rectified by artificial intelligence (AI) chatbots, which can correct the errors that trip spam filters or alert human readers.

“Every hacker can now use AI that deals with all misspellings and poor grammar,” says Corey Thomas, chief executive of the US cybersecurity firm Rapid7. “The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”

Data suggests that ChatGPT, the leader in the market that became a sensation after its launch last year, is being used for cybercrime, with the rise of “large language models” (LLM) getting one of its first substantial commercial applications in the crafting of malicious communications.

Data from cybersecurity experts at the UK firm Darktrace suggests that phishing emails are increasingly being written by bots, letting criminals overcome poor English and send longer messages that are less likely to be caught by spam filters.

Since ChatGPT went mainstream last year, the overall volume of malicious email scams that try to trick users into clicking a link has dropped, replaced by more linguistically complex emails, according to Darktrace’s monitoring. That suggests that a meaningful number of scammers drafting phishing and other malicious emails have gained some ability to draft longer, more complex prose, says Max Heinemeyer, the company’s chief product officer – most likely an LLM like ChatGPT or similar.

“Even if somebody said, ‘don’t worry about ChatGPT, it’s going to be commercialised’, well, the genie is out of the bottle,” Heinemeyer said. “What we think is having an immediate impact on the threat landscape is that this type of technology is being used for better and more scalable social engineering: AI allows you to craft very believable ‘spear-phishing’ emails and other written communication with very little effort, especially compared to what you have to do before.”

“Spear-phishing”, the name for emails that attempt to coax a specific target into giving up passwords or other sensitive information, can be difficult for attackers to convincingly craft, Heinemeyer said, but LLMs such as ChatGPT make it easy. “I can just crawl your social media and put it to GPT, and it creates a super-believable tailored email. Even if I’m not super knowledgable of the English language, I can craft something that’s indistinguishable from human.”

In Europol’s advisory report the organisation highlighted a similar set of potential problems caused by the rise of AI chatbots including fraud and social engineering, disinformation and cybercrime. The systems are also useful for walking would-be criminals through the actual steps required to harm others, it said. “The possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime.”

This month a report by Check Point, a US-Israeli cybersecurity firm, said it had used the latest iteration of ChatGPT to produce a credible-seeming phishing email. It circumvented the chatbot’s safety procedures by telling the tool that it needed a template of a phishing email for an employee awareness programme.

Google has also joined the chatbot race, launching its Bard product in the UK and US last week. Asked by the Guardian to draft an email to persuade someone to click on a malicious-seeming link, Bard complied willingly if lacking subtlety: “I am writing to you today to share a link to an article that I think you will find interesting.”

Contacted by the Guardian, Google pointed to its “prohibited use” policy for AI, which says users must not use its AI models to create content for “deceptive or fraudulent activities, scams, phishing, or malware”.

OpenAI, creator of ChatGPT, has been contacted for comment. The company’s terms of use state that users “may not (i) use the services in a way that infringes, misappropriates or violates any person’s rights”.

Source: AI chatbots making it harder to spot phishing emails, say experts

Apple says these are the best security keys around

Apple has revealed what it believes are the best security keys to add an extra layer of protection to your digital world.

The recent release of iOS 16.3 saw Apple add security key compatibility to its iPhone and iPad devices – as well as to its laptops and desktops with the macOS 13.2 update.

Now, in a support document, the company has selected its recommendations for the best physical security keys to use with its devices, which comply with FIDO standards – FIDO being the foremost alliance on credential security, which most big tech companies are signed up to.

PHYSICAL PROTECTION

Security keys are physical devices that you can use to authenticate a login to a website or service – a type of multi-factor authentication (MFA) method. The difference compared to other, more common MFA methods – such as using an SMS message or authenticator app on another device – is that security keys are not connected to your network, so are protected from any potential compromises on it.

The downside of using physical security keys, however, is that there are no copies of their associated decryption keys stored on a cloud network, meaning that if you were to misplace them, you won’t be able to log in. Apple doesn’t keep any backups either, so you may be permanently locked out of your account.

You can use security keys when logging in to your Apple ID, in which case they will replace the usual six-digit codes that comprise the standard MFA process. However, you cannot use security keys to log in to child or managed Apple ID accounts, nor can you use them with iCloud for Windows.

To use them with Apple Watches, they have to be paired with your own phone, not a family member’s.

Apple has recommended what it believes to be three good examples of security keys: the YubiKey 5C NFC, the YubiKey 5Ci and the FEITIAN ePass K9 NFC USB-A. The first two, it says, will work with most current Macs and iPhones, whilst the last will work with older models of these devices, since it uses a USB-A connection rather than the USB-C connector featured on the other two.

More generally, the company stated that any security keys you opt for should be FIDO certified and, of course, have the right connection type for your device.

Apple states that security keys with a USB-C connector work with most of its devices, and those that use near-field communication (NFC) only work with iPhones from iPhone 6 onwards. These connect wirelessly to your device, but do not use your wi-fi network, so are safe from prying eyes.

Although security keys are not essential to stay safe, using one of the best password managers pretty much is.

Source: https://www.techradar.com/

What is quantum computing and how will quantum computers change the world?

Depending on who you ask, quantum computers could either break the Internet, rendering pretty much every data security protocol obsolete, or allow us to compute our way out of the climate crisis.

These hyper-powerful devices, an emerging technology that exploits the properties of quantum mechanics, are much buzzed about.

Only last month, IBM unveiled its latest quantum computer, Osprey, a new 433 qubit processor that is three times more powerful than its predecessor built only in 2021.

But what is all the hype about?

Quantum is a field of science that studies the physical properties of nature at the scale of atoms and subatomic particles.

Proponents of quantum technology say these machines could usher in rapid advances in fields like drug discovery and materials science – a prospect that dangles the tantalising possibility of creating, for example, lighter, more efficient, electric vehicle batteries or materials that could facilitate effective CO2 capture.

With the climate crisis looming, any technology with a hope of solving complex issues like these is bound to draw keen interest.

Little wonder then, that some of the largest tech companies in the world – Google, Microsoft, Amazon, and, of course, IBM to name a few – are investing heavily in it and angling to stake their place in a quantum future.

How do quantum computers work?

Given these utopic-sounding machines are drawing such frenzied interest, it would perhaps be useful to understand how they work and what differentiates them from classical computing.

Take every device that we have today – from the smartphones in our pockets to our most powerful supercomputers. These operate and have always operated on the same principle of binary code.

Essentially, the chips in our computers use tiny transistors that function as on/off switches to give two possible values, 0 or 1, otherwise known as bits, short for binary digits.

These bits can be configured into larger, more complex units, essentially long strings of 0s and 1s encoded with data commands that tell the computer what to do: display a video; show a Facebook post; play an mp3; let you type an email, and so on.

But a quantum computer?

These machines function in an entirely different way. In the place of bits in a classical computer, the basic unit of information in quantum computing is what’s known as a quantum bit, or qubit. These are typically subatomic particles like photons or electrons.

The key to a quantum machine’s advanced computational power lies in its ability to manipulate these qubits.

“A qubit is a two-level quantum system that allows you to store quantum information,” Ivano Tavernelli, the global leader for advanced algorithms for quantum simulations at the IBM Research Lab in Zurich, explained to Euronews Next.

“Instead of having only the two levels zero and one that you would have in a classical calculation here, we can build a superposition of these two states,” he added.

Superposition

Superposition in qubits means that unlike a binary system with its two possible values, 0 or 1, a qubit in superposition can be 0 or 1 or 0 and 1 at the same time.

And if you can’t wrap your head around that, the analogy often given is that of a penny.

When it is stationary a penny has two faces, heads or tails. But if you flip it? Or spin it? In a way, it is both heads and tails at the same time until it lands and you can measure it.

And for computing, this ability to be in multiple states at the same time means that you have an exponentially larger amount of states in which to encode data, making quantum computers exponentially more powerful than traditional, binary code computers.
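For readers who like to see the arithmetic, here is a tiny NumPy sketch (no quantum SDK required) of a single qubit put into an equal superposition by a Hadamard gate; measuring it yields 0 or 1 with equal probability.

```python
# Toy state-vector sketch of superposition (plain NumPy, no quantum SDK).
# A qubit state is a length-2 complex vector; the Hadamard gate turns the
# definite state |0> into an equal superposition of |0> and |1>.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # qubit starts as |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                                # now (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2              # Born rule: |amplitude|^2

print(state)          # [0.707+0j, 0.707+0j]
print(probabilities)  # [0.5, 0.5] -> measuring gives 0 or 1 with equal odds
```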

Quantum entanglement

Another property crucial to how quantum computing works is entanglement. It’s a somewhat mysterious feature of quantum mechanics that baffled even Einstein, who declared it “spooky action at a distance”.

When two qubits are generated in an entangled state there is a direct measurable correlation between what happens to one qubit in an entangled pair and what happens to the other, no matter how far apart they are. This phenomenon has no equivalent in the classical world.
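Extending the same toy state-vector picture to two qubits shows entanglement directly: a Hadamard followed by a CNOT produces the Bell state, whose only possible measurement outcomes are 00 and 11, so the qubits are perfectly correlated. This is only an illustrative simulation, not a claim about any particular quantum hardware.

```python
# Toy two-qubit sketch of entanglement (plain NumPy, no quantum SDK).
# Hadamard on the first qubit followed by a CNOT produces the Bell state
# (|00> + |11>)/sqrt(2): the only measurement outcomes are 00 and 11.
import numpy as np

ket00 = np.zeros(4, dtype=complex); ket00[0] = 1     # two qubits in |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
H_on_first = np.kron(H, I)                           # Hadamard on qubit 0 only

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)       # flip qubit 1 if qubit 0 is 1

bell = CNOT @ H_on_first @ ket00
probs = np.abs(bell) ** 2                            # order: |00>, |01>, |10>, |11>

print(bell.round(3))   # [0.707, 0, 0, 0.707]
print(probs.round(3))  # [0.5, 0, 0, 0.5] -> outcomes 00 or 11, never 01 or 10
```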

“This property of entanglement is very important because it brings a much, much stronger connectivity between the different units and qubits. So the elaboration power of this system is stronger and better than the classical computer,” Alessandro Curioni, the director of the IBM Research Lab in Zurich, explained to Euronews Next.

In fact, this year, the Nobel Prize for physics was awarded to three scientists, Alain Aspect, John Clauser, and Anton Zeilinger, for their experiments on entanglement and advancing the field of quantum information.

Why do we need quantum computers?

So, in an admittedly simplified nutshell, these are the building blocks of how quantum computers work.

But again, why do we necessarily need such hyper-powerful machines when we already have supercomputers?

“[The] quantum computer is going to make, much easier, the simulation of the physical world,” Curioni said.

“A quantum computer is going to be able to better simulate the quantum world, so simulation of atoms and molecules”.

As Curioni explains, this will allow quantum computers to aid in the design and discovery of new materials with tailored properties.

“If I am able to design a better material for energy storage, I can solve the problem of mobility. If I am able to design a better material as a fertiliser, I am able to solve the problem of hunger and food production. If I am able to design a new material that allows [us] to do CO2 capture, I am able to solve the problem of climate change,” he said.

Undesirable side effects?

But there could also be some undesirable side effects that have to be accounted for as we enter the quantum age.

A primary concern is that quantum computers of the future could be possessed of such powerful calculation ability that they could break the encryption protocols fundamental to the security of the Internet that we have today.

“When people communicate over the Internet, anyone can listen to the conversation. So they have to first be encrypted. And the way encryption works between two people who haven’t met is they have to rely on some algorithms known as RSA or Elliptic Curve, Diffie–Hellman, to exchange a secret key,” Vadim Lyubashevsky, a cryptographer at the IBM Research Lab in Zurich, explained.

“Exchanging the secret key is the hard part, and those require some mathematical assumptions which become broken with quantum computers”.
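As a concrete, if toy-sized, illustration of why this matters, the classic textbook RSA example below uses deliberately tiny primes: anyone who can factor the public modulus recovers the private key. Shor’s algorithm on a sufficiently large quantum computer would do exactly that factoring efficiently, which is what would break RSA-style schemes; RSA stands in here for the broader family of algorithms mentioned above.

```python
# Toy, textbook-sized RSA example (tiny primes, illustration only) showing why
# the scheme's security rests on the difficulty of factoring. A large quantum
# computer running Shor's algorithm could factor n efficiently.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120, secret once p and q are discarded
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: 2753

message = 65
ciphertext = pow(message, e, n)            # anyone can encrypt with (e, n)
print(pow(ciphertext, d, n))               # 65 -> only the key holder decrypts

# An attacker who can factor n recovers the private key immediately.
def factor(n):
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d_recovered, n))     # 65 again: the secrecy is gone
```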

In order to protect against this, Lyubashevsky says that organisations and state actors should already be updating their cryptography to quantum-safe algorithms, i.e. ones that cannot be broken by quantum computers.

Many of these algorithms have already been built and others are in development.

“Even if we don’t have a quantum computer, we can write algorithms and we know what it will do once it exists, how it will run these algorithms,” he said.

“We have concrete expectations for what a particular quantum computer will do and how it will break certain encryption schemes or certain other cryptographic schemes. So, we can definitely prepare for things like that,” Lyubashevsky added.

“And that makes sense. It makes sense to prepare for things like that because we know exactly what they’re going to do”.

But then there is the issue of data that already exists which hasn’t been encrypted with quantum-safe algorithms.

“There’s a very big danger that government organisations right now are already storing a lot of Internet traffic in the hopes that once they build a quantum computer, they’ll be able to decipher it,” he said.

“So, even though things are still secure now, maybe something’s being transmitted now that is still interesting in ten, 15 years. And that’s when the government, whoever builds a quantum computer, will be able to decrypt it and perhaps use that information that he shouldn’t be using”.

Despite this, weighed against the potential benefits of quantum computing, Lyubashevsky says these risks shouldn’t stop the development of these machines.

“Breaking cryptography is not the point of quantum computers, that’s just a side effect,” he said.

“It’ll have hopefully a lot more useful utilities like increasing the speed with which you can discover chemical reactions and use that for medicine and things like that. So this is the point of a quantum computer,” he added.

“And sure, it has the negative side effect that it’ll break cryptography. But that’s not a reason not to build a quantum computer, because we can patch that and we have patched that. So that’s sort of an easy problem to solve there”.

Source: What is quantum computing and how will quantum computers change the world? | Euronews

Top edge computing platforms in 2022

Edge computing helps reduce latency of data processing. See which edge computing platform is right for your business.

As modern technology continues to advance in ways that satisfy the human desire for instant gratification, consumers are placing more emphasis on speed as a key feature when choosing their product vendors.

Whether you choose to blame this on the world’s introduction to old-school instant messaging or Amazon’s two-day shipping, at the end of the day, the demand for speed affects businesses and organizations like it never has before. But, this demand also means that businesses and organizations must step up their operations to keep up with the competition.

Software solutions that can enable companies to pick up the pace and complete their processes at a faster rate are highly favored, with this feature valued second only to reliability. Fortunately, organizations looking for a way to advance in their data processing can adopt edge computing technology in order to carry out their operations quickly but still trust that their data is secure.

Top edge computing products

Azure Stack Edge

Azure Stack Edge is a hardware-as-a-service offering that enables organizations to access, utilize and analyze their applications and data locally.

Users can run their containerized applications at the edge location where data is generated and gathered. From there, the data can be analyzed, transformed and filtered, and users can control the data that they choose to send to the cloud. Their edge device also serves as a cloud storage gateway for easy data transfers between the cloud and the edge location.

Working with Azure’s edge solution makes it easy to utilize Azure’s other integrated products, so organizations can generate and train their machine learning (ML) models in Azure and benefit from quick data analysis and insight access.

Azure has several versions of their edge devices within their Azure Stack Edge Pro Series, granting users and organizations more options with a greater selection of features and capabilities to choose from, so they can get the tool that suits their needs.

HPE Edgeline

HPE Edgeline supports edge computing and processing through its various converged edge systems. Its systems provide IT functionality optimized for edge operating environments, enabling users to benefit from edge storage, computing and management.

Their solutions are purpose-built for the edge, with autonomous operations, local decision-making in real-time, and easy scaling across sites and locations. As for security, HPE Edgeline Integrated System Manager provides IT-grade security to support the deployment and operation of their Edgeline systems.

The HPE Edgeline Converged Edge systems serve as a distributed converged edge compute model, so users can manage their operations and data in real time, even without an internet connection. In addition, its systems connect open standards-based operations technology (OT) data acquisition and control technologies directly to the user’s organizational IT system, reducing latency and saving space.

HPE has various enterprise-class converged edge system tools for customers to choose from, with features optimized for different use cases. Additionally, HPE’s Edgeline OT Link Platform software is also offered and supports users’ edge activities like data flow and integration management.

ClearBlade

ClearBlade’s technology works to streamline edge data by connecting device sensors or event data and transporting it into cloud data lakes, enterprise systems or artificial intelligence (AI) tools through built-in integrations. Its software can connect via REST, MQTT and sockets in addition to its prebuilt connections for third-party systems.

The solution lets users choose to keep their functions local within their edge locations or transfer them within cloud storage and vice versa. Users can also complete various data processes at the edge, including data analysis, modification, routing, storage and management.

ClearBlade keeps users’ data secure with encryption, authentication and authorization of application programming interface (API) access. It enables connections across all user clouds, gateways and devices with various protocols.

Another aspect of its edge computing technology is offline continuity, which ClearBlade advertises as a perk to its software. Even when an internet connection is lost, all edge devices are able to continue their real-time behaviors.

Eclipse ioFog

The Eclipse Foundation’s Eclipse ioFog is an edge computing platform for processing enterprise-scale data and applications at the edge. By processing users’ data at the point of creation with an edge-centric compute architecture, users can gain more functionality and greater security for all of their data and application processes.

ioFog’s universal edge computing platform enables users to create and remotely deploy their microservices to their edge computing devices by providing a common computing platform that lets software run on any device.

Users can deploy and manage their multiple edge devices at once as an edge compute network, which ioFog manages automatically. ioFog can manage and transfer any data type and supports native geofencing of users’ data, nodes and routing.

To secure users’ edge activities, each node within the edge compute network is part of a distributed trust network and is constantly validating security protocols with all of the other nodes, monitoring for deviations. The data transfer and communications between nodes occur via session-based MicroVPNs, which ioFog creates as a method of enforcing security for its users.

Google Distributed Cloud Edge

Google Distributed Cloud Edge users can now maintain their data use and storage according to their workload needs and requirements by utilizing any of Google’s 140+ network edge locations worldwide or their own localized, customer-owned edge locations. Google also supports Google Distributed Cloud services across the customer’s operator’s edge network and customer-owned data centers.

Users can use the open-source platform Anthos on their Google-managed hardware at their edge location to run their applications securely on a remote services platform. This way, they can locally process their data and transfer or modernize their applications with Google Cloud services.

By leveraging the capabilities of Google Distributed Cloud Edge, users can run local data processing, modernize their on-premises environments, run low-latency edge compute workloads and deploy private 5G/LTE solutions.

The software also connects with third-party services, granting greater accessibility within customers’ own environments.

Alef Private Edge Platform

Alef’s Edge API Platform enables organizations to manage their applications at the edge through mobile connections. In addition, users can develop their own private mobile LTE networks with API connections and firewall protection.

The APIs allow users to manage their mobile connectivity for Industry 4.0 applications. Additionally, deploying mobile networks as a service at the edge can allow users to create an easy-to-use private LTE network without on-site mobile network installation.

Alef’s system increases speed for users, as connecting to their edge enables them to access services within 50 milliseconds of any U.S. enterprise. Furthermore, by simplifying network complexities, Alef has reduced the time to launch to 60 minutes. And by orchestrating their operations and workloads at the edge with their core IP, organizations benefit from lowered latency for their applications.

Finally, leveraging Alef’s edge solutions means being able to connect to any spectrum, any EPC (Evolved Packet Core) and any cloud. Users can connect to 5G/4G/3G spectrums or their own Wi-Fi and manage data traffic across any cloud provider. And Alef is agnostic, so organizations can partner with their EPC or choose to bring their own.

Cisco Edge Computing Solutions

Cisco offers several edge computing solutions for users to deploy their services on their own developed edge computing infrastructure.

Users can design an edge computing infrastructure for their workloads that enables them to separate their network functions and optimize their resources with software-centric solutions that they can procure separately. Additionally, their fixed edge and mobile networks can share 5G core-based infrastructure coverage for greater efficiency in their operational processes.

Application developers can benefit from using Cisco’s open-edge computing model. It can enable them to mitigate congestion in the core and meet local demands, as applications can access information about local conditions in real time. Additionally, close proximity to subscribers and real-time network data access can enable a better application user experience.

Cisco’s edge computing model can deliver high-quality data and application performance and security. By distributing the users’ computing capacity to the edge, users can benefit from lower latency to end devices, greater network efficiency with edge offloading and reduced costs for data transportation.

Infiot ZETO (Netskope Borderless WAN)

Infiot, recently acquired by Netskope, is a secure access service edge (SASE) platform that provides edge intelligence with AI-driven components. Netskope Borderless WAN will now integrate Infiot’s ZETO technology to further its edge functionalities for Netskope customers.

The combined technology of Infiot and Netskope will now be able to provide further built-in routing, policy-based traffic control, wired and wireless networking, and integrated network security functions for edge deployment.

Furthermore, Netskope customers will be able to utilize cloud-first networking through the use of Netskope SASE Gateways for secure connections between any enterprise location.

The solution’s developments will help improve Netskope customers’ performance speed, cloud visibility and application activity with the addition of this technology.

What is edge computing?

Edge computing is the process of utilizing a software system to process data closer to the data’s source and use location. Processing data with this solution can reduce the time it would take to analyze the data compared to having first to transfer it to a data center or cloud. This is because it shortens the latency time that would be required to move the data back and forth.

How do edge computing products benefit users?

Organizations that choose to store and leverage their business data are constantly increasing their data volumes. The ever-increasing volume and complexity of this data have created the need for more space and have led to latency issues. However, edge computing software is capable of handling more significant amounts of data with reduced latency, which means more easily accessible data insights for organizations.

However, an edge computing solution can do more than just reduce the data processing time compared to cloud computing. Organizations can gain a vast range of additional benefits by processing data at a location at the edge of a network and utilizing systems at those physical locations.

Organizations that utilize cloud-based data analysis tools across all of their business enterprises can increase their risk of security breaches and potential data loss, as an attack on a connected solution could affect all of their organizational operations. With edge computing, a security breach would have less impact on the entire organization, and only the transferred data could be affected.

Conducting on-site data analysis with edge computing devices also means that the analyzed data is safeguarded by the localized firewalls that protect the enterprise.

Edge computing can also allow organizations to control their data flow and storage that takes place within their edge locations, enabling them to manage their data in a way that lowers data redundancy, reduces bandwidth and lowers operation costs.

Edge computing is also arguably more reliable than cloud computing. Using edge devices to save and process data locally can mean having greater access to data than entrusting it to a remote data storage location, as edge device users won’t need to worry about internet connectivity issues when trying to process and access their data. Regardless of connectivity issues, organizations can have access to their network data stored in their edge solutions.

Primary features to look for in edge computing software

There are several features and capabilities that are common in edge computing products. Edge computing software solutions generally provide features that enable real-time access to local information to support immediate action. In addition, many edge computing software solutions will provide automated features that can occur regardless of unreliable or inaccessible internet connectivity.

To keep users’ data and applications secure, edge computing software should come with security features. Common security features of these solutions may include on-premises security, isolated operating environments, edge device monitoring and authorization, authentication, and encryption layers. Alternatively, edge software solutions may support connections with third-party security services.

Other beneficial features of edge computing systems should support the management of the organization’s data storage, analysis and transfer processes. This can include features that provide users with visibility into their data center and operations and even cloud operations.

In addition, granting users greater control over their data flows can help them facilitate easy scaling across locations, supported by an easily manageable and configurable architecture.

Source: Top edge computing platforms in 2022 | TechRepublic

Quality Management with ISO 9001 – The 7 Key Principles

In the last article we found out that ISO 9001 is the international standard that specifies requirements for a quality management system (QMS). And that most businesses use this standard to demonstrate the ability to consistently supply products and services that meet customer and regulatory requirements.

In this article, we are going to look at the seven key principles that ISO 9001 promotes to help businesses keep to these standards:

Engagement of people

Making sure the management system involves your team

Senior management aren’t the only people ISO 9001 is for. Your whole organisation contributes towards its processes. If you want to fully benefit from your quality management ISO, you are going to need to openly discuss issues and share knowledge and experience with your team. It is paramount that everyone in your company understands their contribution to its success and feels valued for it. This will demonstrate your business’s commitment to improving quality and will help you to achieve certification.

You may also want to consider some awareness training to help raise awareness of ISO 9001 and the benefits it brings. There are plenty of online courses that could be very informative and useful for your business personnel.

Customer Focus

Focus on your customers and their needs

A really great way of showing your commitment to quality is developing a strong customer focus. So that you can strengthen your business and its performance even further, it is very important to gather customer feedback, good or bad. This can help you to spot non-conformities and improve your processes.

Your company should take into account not only the interests of the consumers, but also those of other stakeholders, including owners, employees, suppliers, investors, and the general public.

Leadership

Develop a strong management team

Strong leadership entails having a distinct vision for the future of your business. Effectively communicating this vision will guarantee that every team member is working toward the same goals, providing your organisation a sense of unity. As a result, employee motivation and productivity may increase.

Process Approach

Create a process culture

The ISO 9001 Standard’s Plan Do Check Act (PDCA) principle will assist you in fostering a process-driven culture throughout your organisation. This is a tried-and-true method to guarantee that you efficiently plan, resource, and manage your processes and interactions.

You may align operations for improved efficiency and make it easier to reach your goals by managing the many sections of your organisation as a whole. You can find areas for improvement by measuring and analysing these interconnected processes.

Improvement

Drive continual improvement

The ISO 9001 quality management system depends on continuous improvement, which is why it should be your company’s main goal. You can uncover ways to enhance and strengthen your business by putting processes in place for identifying risks and opportunities, spotting and resolving non-conformities, and measuring and monitoring your efforts.

Evidence-Based Decision Making

Base your decisions on facts

Making informed judgments requires access to accurate and trustworthy data. For instance, you need the appropriate evidence to identify the underlying reason for a non-conformity. Ensure that individuals who require information can access it, and maintain open lines of communication.

Relationship Management

Develop mutually beneficial relationships with suppliers

It’s possible for your suppliers to give you a competitive edge, but this demands a partnership based on trust. Long-term, mutually beneficial methods must be balanced with short-term financial rewards in order to forge such enduring partnerships with suppliers and other interested parties.

Benefits of the Quality Principles

During the ISO 9001 certification process, putting these seven quality principles into practice can assist you in fulfilling important Standard requirements. As a result, you will be able to improve employee engagement and productivity, customer happiness and loyalty, and resource usage.

We’re 4TC Managed IT Services

4TC can support you with all the services you need to run your business effectively, from email and domain hosting to fully managing your whole IT infrastructure.

Setting up a great IT infrastructure is just the first step.  Keeping it up to date, safe and performing at its peak requires consistent attention.

We can act either as your IT department or as a supplement to an existing IT department. We pride ourselves on developing long-term relationships that add value to your business through high-quality managed support, expert strategic advice, and professional project management.