Tag Archive for: News Posts

Microsoft is testing an AI-powered image creator for Windows 11 Paint

Based on OpenAI’s DALL-E text-to-image platform, Paint Cocreator will conjure up images in Windows Paint based on your descriptions.

Those of you who’ve ever struggled to draw your own artwork in Microsoft Paint will soon be able to turn to AI to automatically generate your desired images.

In a Windows Insider blog post published Wednesday, Microsoft announced a new AI-powered tool for the Paint app in Windows 11 that will create images for you. Known as Paint Cocreator and based on OpenAI’s DALL-E text-to-image platform, the feature is currently on its way to Windows Insiders.

Like other AI-based image generators, Paint Cocreator will cook up images based on your descriptions. You can submit anything from a few general words to a couple of sentences. But you’ll want to be as descriptive as possible to improve your odds of getting just the right image.

You can also select a particular style of art. When you’re ready, the tool will create and display three different images from which to choose. Select the one you like and you can then fine-tune it in the Paint canvas if you wish.

To let you use Cocreator, however, Microsoft is imposing a credits system as virtual currency, according to its support page for the tool.

Joining Cocreator starts you off with 50 credits. Each time you create an image, one credit is used. You can always see how many credits you have left by checking the bottom right area of the Cocreator pane.

Microsoft didn’t explain what happens if you run out of credits and need more. But hopefully that will get ironed out before the tool hits the release version of Windows 11, assuming the credits system sticks around.

To ensure that Cocreator is being used responsibly, Microsoft will also employ content filtering. The aim here is to prevent people from creating images that are considered harmful, offensive, or inappropriate. The content filter is based on certain standards, such as human dignity, diversity, and inclusion, according to Microsoft.

With all the buzz and interest surrounding AI, image generators have taken off among people who need to generate drawings, paintings, artwork, and other types of graphics. OpenAI offers its DALL-E image creator on which other tools are based. The company is currently testing a new version known as DALL-E 3.

Microsoft’s DALL-E-powered Bing Image Creator works as a standalone tool and as part of its Bing AI chatbot. Other popular tools include Midjourney, Stable Diffusion, DreamStudio, and Craiyon.

To try Cocreator at this point, you’ll need to be registered with the Windows Insider program for Windows 11. You’ll also need to be running the Dev or Canary build of Windows 11. And for now, the tool is available only in the US, UK, France, Australia, Canada, Italy, and Germany.

Make sure you’ve downloaded and installed the latest updates for your Windows 11 build. Once Cocreator is accessible, you’ll still need to join a waitlist to use the tool as Microsoft is rolling it out slowly at first. To do this, click the Waitlist button in the Cocreator pane when the tool shows up in your build. You’ll receive an email notice when you’ve been approved to use it.

Source: Microsoft is testing an AI-powered image creator for Windows 11 Paint | ZDNET

The IT Support Dilemma: Your Ultimate Guide to Business Survival

Are you feeling overwhelmed with all the technical problems in your business? Trying to manage IT issues on top of everything else can be extremely stressful, but it doesn’t have to be. This guide covers actionable advice on how to handle typical IT support dilemmas so that you can focus your time and energy on what matters most: growing your business.

You’ll learn tips and tricks for budgeting resources, finding reliable help, preventing network security threats, and much more – allowing you to build a strong foundation for success. So fasten your seatbelt as we dive deeper into the world of successful IT strategies!

Understanding the IT Support Dilemma

As technology continues to advance, businesses are increasingly reliant on IT support to ensure their operations run smoothly. Herein lies the IT support dilemma: businesses must provide adequate tech support to employees while keeping costs under control. 

Lackluster IT support can lead to lost productivity and a potential loss of customers, making it essential for businesses to confront this challenge head-on. Investing in effective IT support can help businesses stay ahead of the curve and ensure their technology stays up-to-date, reliable, and secure. Whether it’s through an in-house IT team or outsourcing to a third-party provider, confronting the IT support dilemma is essential for any business that wants to remain competitive in today’s tech-driven landscape. Plus, with the right IT support, your employees can focus on their core responsibilities without getting bogged down by technical issues, a win-win situation for everyone involved.

How To Set Up Your IT Support System

Setting up an IT support system can be a daunting task, especially if you’re not familiar with the process. However, it’s undeniable that a well-planned support system can make all the difference when it comes to efficiency and issue resolution. The first step in the process is evaluating your current setup. Take a look at what’s currently in place and determine what’s working and what’s not. Once you have an understanding of your current situation, it’s time to explore the different options available. 

From hiring in-house support to outsourcing to a third-party vendor, there are pros and cons to each approach. Furthermore, when it comes to outsourcing, you can always find a guide to outsourcing IT support online. That way, you can make an informed decision that is best for your business. When making your decision, consider your budget, business needs, and long-term goals. With the right IT support system in place, you’ll be able to focus on what really matters – growing your business.

Secure Your Network

Cybersecurity is a growing concern for individuals and businesses alike. With the increased use of technology and the internet, it’s more important than ever to secure your network and protect your data from malicious attacks. 

Best practices for strengthening cybersecurity include regularly updating your software and operating systems, using strong and unique passwords, implementing antivirus and anti-malware software, and limiting access to sensitive data. Taking these steps and remaining vigilant will go a long way towards keeping your information safe from cyber threats. Don’t wait until it’s too late to take action – start securing your network today.

Manage Your IT Budget Wisely 

Now, this can be a daunting task, especially when you need to cut costs without compromising on the quality of your operations. However, with the right strategies, you can identify areas where you can reduce costs and optimize your budget without compromising the efficiency of your business. 

One such strategy is to analyze your IT assets and determine which ones are underutilized or obsolete. You can then eliminate or replace them with more cost-effective alternatives. Another useful approach is to leverage cloud computing instead of investing in expensive hardware and software, as it allows you to pay only for the resources you use. Implementing these and other cost-cutting strategies can not only help you manage your IT budget more wisely but also boost your organization’s overall performance and profitability.

Make Use of Automation Tools & Services

Companies that seek to streamline their operations and save time and money are turning to technology to help them achieve those goals. Automation tools can be used in a variety of ways, from automating repetitive tasks to providing data analysis that can help businesses make better decisions. Making use of these tools and services can enable companies to reduce human error, improve efficiency, and ultimately increase their bottom line. 

Furthermore, automation technology is constantly evolving, giving businesses access to even more advanced solutions that can make them more competitive in their markets. Whether it’s automating manufacturing processes or improving customer service through chatbots, there are endless options available to those willing to embrace automation.

Monitor Performance Reliably

Ensuring reliable performance is crucial for any business, which is why tracking performance trends and addressing any issues quickly is essential. By monitoring performance trends, you can identify areas that require improvement and take proactive steps to maintain productivity and efficiency. 

However, it’s not always easy to stay on top of performance metrics, especially if you lack technical expertise. This is where expert support comes in. With the right support, you can have peace of mind knowing that any performance issues will be addressed quickly and efficiently. In turn, this will help you focus on optimizing your business operations and achieving your goals. Therefore, whether you’re dealing with technical issues or simply need some guidance, don’t hesitate to rely on expert support for all your performance monitoring needs.

To sum it up, it is evident that having an effective IT support system in place for your business is a must and one that should not be taken lightly. Taking the time to evaluate your current setup, review different options, secure your network with best-in-class practices, manage your IT budget wisely, make use of automation tools and services, and monitor performance reliably will help take your company from good to great. Don’t forget to enlist the help of expert professionals when you need advanced insight or further assistance addressing matters related to any of these areas. After all, preventing unforeseen problems is much easier than cleaning up a mess once it’s already been made.

Source: The IT Support Dilemma: Your Ultimate Guide to Business Survival (swindonlink.com)

Data-driven cyber: empowering government security with focused insights from data

In recent months, the NCSC has been accelerating its approach to data-driven cyber (DDC). Our goal is to encourage the adoption of an evidence-based approach to cyber security decisions, not only in how we advise external organisations, but also in how we address our own security.

We acknowledge that enterprise cyber security is becoming increasingly complex, and many teams are reluctant to introduce an additional ‘data layer’ due to concerns of becoming overwhelmed. In this blog post, we aim to demonstrate how concentrating on manageable, actionable insights can help teams embrace data-driven cyber security.

Our example showcases a collaboration between two teams within the NCSC:

  • the Vulnerability Reporting Service (VRS)
  • the Data Campaigns and Mission Analytics (DCMA) team

The VRS leads the NCSC’s response to vulnerabilities, while DCMA use their expertise in data science and analysis to provide the NCSC Government Team with evidence-based security insights.

Small actionable insights drive action

Many government teams, including the VRS, gather and manage vast amounts of valuable data. The challenge they face is how to best analyse this, given the misconception that developing any useful insights requires a complete overhaul of existing workflows.

This misconception stems from the idea that implementing DDC involves plugging all data into a complex ‘master formula’ to unveil hidden insights and narratives. However, it’s essential to recognise that, especially in the beginning, DDC should be viewed as a tool for generating ‘small yet actionable insights’ that can enhance decision-making. This simpler and more focused approach can yield significant benefits.

Vulnerability Avoidability Assessment

In the case of the VRS we did exactly that, starting with the data sets that were available to the team and then focusing on a single insight that could be used to have a meaningful evidence-based security conversation.

To this end we created the Vulnerability Avoidability Assessment (VAA), an analytic that uses two internal data sources and one public source to determine what proportion of vulnerability reports were a result of out-of-date software. The data sources comprised:

  • number of vulnerability reports received by VRS
  • number of reports where out-of-date software was listed as a reason
  • public vulnerability disclosure database

We created this analytic knowing that patch management is one area that could be influenced, and that diving deeper into the link between patch management and the vulnerabilities reported through the VRS would provide us with a security discussion point about how vulnerabilities can potentially be avoided or reduced.
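As a rough illustration of the analytic (a hypothetical sketch using made-up figures, not NCSC data or code), the core proportion can be computed from the two internal counts alone:

```python
# Hypothetical sketch of the core VAA metric: the share of monthly
# vulnerability reports where out-of-date software was listed as a reason.
# All names and figures here are illustrative, not NCSC data.

def avoidable_share(total_reports: int, outdated_reports: int) -> float:
    """Percentage of reports attributed to out-of-date software."""
    if total_reports == 0:
        return 0.0
    return round(100 * outdated_reports / total_reports, 1)

# Two invented months of (total reports, out-of-date reports).
monthly = {"Jan": (125, 2), "Aug": (88, 27)}
shares = {month: avoidable_share(total, outdated)
          for month, (total, outdated) in monthly.items()}
```

With these invented counts, the two shares come out at 1.6% and 30.7% respectively.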

Our analysis

We gained a deeper insight into the impact of unpatched software on government systems by comparing the number of vulnerability reports resulting from outdated software with information from an open source database. This database provided estimates of how long these vulnerabilities had been publicly known, and when patches had become available.

Using the above approach we were able to define an ‘avoidable vulnerability’ as one that has been publicly known for a considerable time, to the extent that a responsible organisation would reasonably be expected to have taken the necessary actions to apply the required updates and patches.

Our analysis of data from 2022 (refer to Table 1, below) revealed that each month the VRS receives a considerable number of vulnerability reports directly linked to software that was no longer up to date, ranging from 1.6% to a peak of 30.7% of vulnerabilities in a single month over the course of the year.


We also investigated how long the software vulnerabilities went unpatched before they were reported. Referring to NCSC guidance, which recommends applying all released updates for critical or high risk vulnerabilities within 14 days (NCSC Cyber Essentials guidance on ‘Security Update Management’, Page 13), we chose a 30-day buffer as a consistent timeframe for applying patches, regardless of their severity. Separating the timelines into these increments, we found that 70% of outdated software vulnerabilities reported to the VRS were due to software remaining unpatched for more than 30 days (refer to Chart 1, below).
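The 30-day classification can be sketched as follows (the dates, names, and sample reports are invented for illustration; this is not the NCSC’s actual analysis code):

```python
from datetime import date

PATCH_BUFFER_DAYS = 30  # the consistent patching timeframe chosen above

def is_avoidable(patch_available: date, reported: date) -> bool:
    """True if the patch had been available for more than 30 days
    when the vulnerability was reported."""
    return (reported - patch_available).days > PATCH_BUFFER_DAYS

# Invented reports: (date a patch became available, date reported to the VRS)
reports = [
    (date(2022, 1, 3), date(2022, 6, 1)),    # unpatched for ~5 months
    (date(2022, 5, 20), date(2022, 6, 1)),   # within the 30-day buffer
    (date(2021, 11, 9), date(2022, 2, 14)),  # unpatched for ~3 months
]
avoidable = sum(is_avoidable(patched, seen) for patched, seen in reports)
share = round(100 * avoidable / len(reports))  # 67% of this toy sample
```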


This newfound understanding provided the VRS team with sufficient data to have an evidence-based discussion with stakeholders regarding their approach to patch management, providing the data insights to support a case for meaningfully reducing the number of vulnerability reports received by the VRS against government systems.


The journey towards DDC has highlighted the immense value of leveraging data to make evidence-based security decisions. The collaboration between the VRS and the DCMA team serves as a concrete example of how data can inform decision making. It is essential for organisations to recognise that adopting DDC does not require a complete overhaul of existing systems, but rather the ability to focus on extracting small but actionable insights that can drive behaviours and decisions.

Source: Data-driven cyber: empowering government security with… – NCSC.GOV.UK

Meta announces AI chatbots with ‘personality’

Meta has announced a series of new chatbots to be used in its Messenger service.

The chatbots will have “personality” and specialise in certain subjects, like holidays or cooking advice.

It is the latest salvo in a chatbot arms race between tech companies desperate to produce more accurate and personalised artificial intelligence.

The chatbots are still a work in progress with “limitations”, said boss Mark Zuckerberg.

In California, during Meta’s first in-person event since before the pandemic, Mr Zuckerberg said that it had been an “amazing year for AI”.

The company is calling its main chatbot “Meta AI”, which can be used in messaging. For example, users can ask Meta AI questions in chat “to settle arguments” or ask other questions.

The BBC has not yet tested the chatbot which is based on Llama 2, the large language model that the company released for public commercial use in July.

Several celebrities have also signed up to lend their personalities to different types of chatbots, including Snoop Dogg and Kendall Jenner.

The idea is to create chatbots that are not just designed to answer questions.

“This isn’t just going to be about answering queries,” Zuckerberg said. “This is about entertainment”.

According to Meta, NFL star Tom Brady will play an AI character called ‘Bru’, “a wisecracking sports debater” and YouTube star MrBeast will play ‘Zach’, a big brother “who will roast you”.

Mr Zuckerberg said there were still “a lot of limitations” around what the bots could answer.

The chatbots will be rolled out in the coming days and only in the US initially.

Mr Zuckerberg also discussed the metaverse – a virtual world – a concept on which he has so far spent tens of billions of dollars.

Although Meta had already announced its new virtual reality headset, Quest 3, the company gave further details at the event.

Meta’s boss described the headset as the first “mainstream” mixed reality headset. Cameras facing forward will mean the headset will allow for augmented reality. It will be available from 10 October.

The firm’s big, long-term bet on the metaverse still appears yet to pay off, with Meta’s VR division suffering $21bn (£17bn) in losses since the start of 2022.

The Quest 3 came after Apple entered the higher-priced mixed reality hardware market with the Vision Pro earlier this year.

Mat Day, global gaming strategy director for EssenceMediacom, said Mark Zuckerberg had “reinvigorated” the VR sector.

“Meta’s VR roadmap is now firmly positioned around hardware priced for the mass market. This is a stark contrast to Apple’s approach which is aimed at the high end tech enthusiast,” he said.

Meta’s announcement came on the same day as rival OpenAI, the Microsoft-backed creator of ChatGPT, confirmed its chatbot can now browse the internet to provide users with current information. The artificial intelligence-powered system was previously trained only using data up to September 2021.

Source: Meta announces AI chatbots with ‘personality’ – BBC News

Transitioning From ISDN To Cloud Telephony: A Step By Step Guide 

In our last piece, we discussed the rise and fall of ISDN as a telephony solution for businesses and contrasted its growing disadvantages with the benefits of modern solutions such as VoIP, with the example of the Microsoft Teams Phone System. With ISDN and PSTN networks being completely taken offline by 2025, it’s essential for businesses to prepare to transition. In this piece, we will give a general step by step guide for migrating from ISDN-based telephony to a cloud-based telephony solution.  

Undertaking the Transition: A Step-by-Step Guide 

The simplicity of leveraging many cloud solutions has made arranging the transition to a new solution generally easier than it used to be. However, it’s still important to map your telephony territory and ensure that a smooth transition can be undertaken for your business.  

Telephony Assessment  

Firstly, although ISDN is an outdated solution, no two businesses are the same. There may be some (albeit rarer) cases where keeping ISDN telephony continues to be more cost-effective for now.  

Begin by assessing your current communication needs and the opportunities in the market. What are the relative strengths and weaknesses of your existing ISDN setup? By assessing the pros and cons around features, pricing, and potential transition costs (more on that shortly), you can move with confidence to planning a transition.  

Choose Your Alternative Solution 

VoIP and SIP (Session Initiation Protocol) offer beneficial alternatives for the vast majority of businesses today. In a nutshell, VoIP can be a virtually wireless solution for a business (excepting internet broadband lines), while SIP offers a still-modern alternative that’s often useful to larger organisations that wish to rely on copper lines.  

Whichever solution you choose within these two umbrellas, it’s important to get clear on how they will be implemented for your particular business, based on its IT environment, infrastructure and commercial needs.  

Select a Reputable Service Provider 

A technology expert that understands the ins and outs of telephony and connectivity can take much of the legwork and stress out of the process for your business. When selecting a provider to help with the transition, consider their expertise, the specific solution’s reliability, customer support, scalability and pricing.  


In partnership with a provider, the migration can be planned in a way that minimises disruption and risk for your business. How the migration will affect services such as customer support is among the considerations to factor in to ensure a smooth transition.  

Upgrade Your Infrastructure 

For modern telephony solutions, a reliable and fast internet connection will do the most justice to your new setup and maximise the benefits it has to offer. Good connectivity will be essential for reliable, high-quality calling. You can consult with a Managed Service Provider to ensure that your network infrastructure is prepared to support the chosen solution. 

Data Migration and Integration 

Transfer your existing contact lists, call logs, and any other pertinent data to the new platform. At this stage, you can also begin to tap into the new benefits that your solution can offer, by integrating the data with your other applications, notably your CRM (Customer Relationship Management) software.  

Training and Familiarisation 

Provide comprehensive training to your employees to acquaint them with the new system. Highlight the benefits, features, and any alterations in operational processes and offer support to make the transition as smooth and supportive as possible.  

Testing and Pilot Phase 

Prior to the full go-live of your new telephony system, it’s best practice to carry out testing and pilot runs to ensure that it works as desired. As you test the solution, document any issues or concerns that arise so that you can address them ahead of the roll out.  

Phased Go-live 

Depending on the size and context of your business, a phased implementation can be helpful for ensuring that the process is a smooth one that works at scale. Begin by using a smaller group of users, such as a particular department that is well placed to use and benefit from your new telephony solution, and just like the testing phase, carefully document any lessons learned that can then be applied across the business.  

Conclusion: Embrace the Future of Communication 

Migrating from ISDN telephony to a VoIP or SIP-based solution can seem like a daunting process, but with planning, assessment, and a phased implementation with the support of a telephony solutions provider, the process can be far smoother and more seamless. There are many benefits to using a VoIP or SIP-based solution compared to traditional ISDN telephony that stand to augment communications and productivity for every business.  

Taking advantage of the latest solutions on the market in today’s world will prove essential for maintaining a competitive edge and achieving profitable growth. We hope this series has been useful to you in your ongoing digital journey. The journey ahead will involve empowering innovation, efficiency, and connectivity; by making a smooth transition sooner rather than later, you’ll be taking another empowering step towards a prosperous future for your business.  

We Are 4TC Managed IT Services 

4TC can support you with all the services you need to run your business effectively, from email and domain hosting to fully managing your whole IT infrastructure. Setting up a great IT infrastructure is just the first step. Keeping it up to date, safe and performing at its peak requires consistent attention. 

We can act as your IT department or supplement an existing IT department. We pride ourselves on developing long-term relationships that add value to your business with high-quality managed support, expert strategic advice, and professional project management. Get assistance with your IT challenges today by getting in touch – we’ll be glad to assist you! 

Conscious Machines May Never Be Possible

In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he’d been working on—LaMDA—had developed not only intelligence but also consciousness. LaMDA is an example of a “large language model” that can engage in surprisingly fluent text-based conversations. When the engineer asked, “When do you first think you got a soul?” LaMDA replied, “It was a gradual change. When I first became self-aware, I didn’t have a sense of soul at all. It developed over the years that I’ve been alive.” For leaking his conversations and his conclusions, Lemoine was quickly placed on administrative leave.

The AI community was largely united in dismissing Lemoine’s beliefs. LaMDA, the consensus held, doesn’t feel anything, understand anything, have any conscious thoughts or any subjective experiences whatsoever. Programs like LaMDA are extremely impressive pattern-recognition systems, which, when trained on vast swathes of the internet, are able to predict what sequences of words might serve as appropriate responses to any given prompt. They do this very well, and they will keep improving. However, they are no more conscious than a pocket calculator.

Why can we be sure about this? In the case of LaMDA, it doesn’t take much probing to reveal that the program has no insight into the meaning of the phrases it comes up with. When asked “What makes you happy?” it gave the response “Spending time with friends and family” even though it doesn’t have any friends or family. These words—like all its words—are mindless, experience-less statistical pattern matches. Nothing more.

The next LaMDA might not give itself away so easily. As the algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models are able to persuade many people that a real artificial mind is at work. Would this be the moment to acknowledge machine consciousness?

Pondering this question, it’s important to recognize that intelligence and consciousness are not the same thing. While we humans tend to assume the two go together, intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart, at least by our questionable human standards. If the great-granddaughter of LaMDA does reach or exceed human-level intelligence, this does not necessarily mean it is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures.

Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.

Machines of this sort will have passed not the Turing Test—that flawed benchmark of machine intelligence—but rather the so-called Garland Test, named after Alex Garland, director of the movie Ex Machina. The Garland Test, inspired by dialog from the movie, is passed when a person feels that a machine has consciousness, even though they know it is a machine.

Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.

Source: Conscious Machines May Never Be Possible | WIRED UK

Inside Microsoft Copilot: A Look At The Technology Stack


As expected, generative AI took centre stage at Microsoft Build, the annual developer conference hosted in Seattle. A few minutes into his keynote, Satya Nadella, CEO of Microsoft, unveiled the new framework and platform for developers to build and embed an AI assistant in their applications.

Branded as Copilot, it is the same framework Microsoft is leveraging to add AI assistants to a dozen applications, including GitHub, Edge, Microsoft 365, Power Apps, Dynamics 365, and even Windows 11.

Microsoft is known to add layers of API, SDK, and tools to enable developers and independent software vendors to extend the capabilities of its core products. The ISV ecosystem that exists around Office is a classic example of this approach.

As a former Microsoft employee, I have observed the company’s unwavering ability to seize every opportunity to transform internal innovations into robust developer platforms. Interestingly, the culture of “platformisation” of emerging technology at Microsoft is still prevalent even after three decades of launching highly successful platforms such as Windows, MFC, and COM.

While introducing the Copilot stack, Kevin Scott, Microsoft’s CTO, quoted Bill Gates – “A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it’s a platform.”

Bill Gates’ statement is exceptionally relevant and profoundly transformative for the technology industry. There are many examples of platforms that grew exponentially beyond the expectations of the creators. Windows in the 90s and iPhone in the 2000s are classic examples of such platforms.

The latest platform to emerge out of Redmond is the Copilot stack, which allows developers to infuse intelligent chatbots with minimal effort into any application they build.

The rise of AI chatbots like ChatGPT and Bard is changing the way end-users interact with software. Rather than clicking through multiple screens or executing numerous commands, they prefer interacting with an intelligent agent that is capable of efficiently completing the tasks at hand.

Microsoft was quick in realizing the importance of embedding an AI chatbot into every application. After arriving at a common framework for building Copilots for many of its products, it is now extending that framework to its developer and ISV community.

In many ways, the Copilot stack is like a modern operating system. It runs on top of powerful hardware based on the combination of CPUs and GPUs. The foundation models form the kernel of the stack, while the orchestration layer is like the process and memory management. The user experience layer is similar to the shell of an operating system exposing the capabilities through an interface.

Let’s take a closer look at how Microsoft structured the Copilot stack without getting too technical:

The Infrastructure – The AI supercomputer running in Azure, the public cloud, is the foundation of the platform. This purpose-built infrastructure, which is powered by tens of thousands of state-of-the-art GPUs from NVIDIA, provides the horsepower needed to run complex deep learning models that can respond to prompts in seconds. The same infrastructure powers the most successful app of our time, ChatGPT.

Foundation Models – The foundation models are the kernel of the Copilot stack. They are trained on a large corpus of data and can perform diverse tasks. Examples of foundation models include GPT-4, DALL-E, and Whisper from OpenAI. Some open-source models like BERT, Dolly, and LLaMA may also be a part of this layer. Microsoft is partnering with Hugging Face to bring a catalogue of curated open-source models to Azure.

While foundation models are powerful by themselves, they can be adapted for specific scenarios. For example, an LLM trained on a large corpus of generic textual content can be fine-tuned to understand the terminology used in an industry vertical such as healthcare, legal, or finance.

Microsoft’s Azure AI Studio hosts various foundation models, fine-tuned models, and even custom models trained by enterprises outside of Azure.

The foundation models rely heavily on the underlying GPU infrastructure to perform inference.

Orchestration – This layer acts as a conduit between the underlying foundation models and the user. Since generative AI is all about prompts, the orchestration layer analyzes the prompt entered by the user to understand the user's or application's real intent. It first applies a moderation filter to ensure that the prompt meets the safety guidelines and doesn't steer the model toward irrelevant or unsafe responses. The same layer is also responsible for filtering out model responses that do not align with the expected outcome.
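As a rough illustration of that moderation step, the sketch below screens a prompt before it ever reaches the model. The blocked categories and keyword matching are invented for illustration; Microsoft's actual filter is far more sophisticated and model-based.

```python
# Minimal sketch of an orchestration-layer moderation step (hypothetical
# rules, not Microsoft's actual filter): screen a prompt before it is
# forwarded to the foundation model.
BLOCKED_TOPICS = {"credential theft", "malware"}  # illustrative categories only

def moderate(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) safety check."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle(prompt: str) -> str:
    if not moderate(prompt):
        return "This request violates the safety guidelines."
    # ...forward the prompt to the foundation model here...
    return "OK: forwarded to model"

print(handle("Summarise this sales report"))  # passes the filter
print(handle("Write malware for me"))         # blocked by the filter
```

A production filter would also run on the model's output, since the same layer is responsible for catching responses that fall outside the expected outcome.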

The next step in orchestration is to complement the prompt with meta-prompting through additional context that's specific to the application. For example, the user may not have explicitly asked for the response to be packaged in a specific format, but the application's user experience needs that format to render the output correctly. Think of this as injecting application-specific context into the prompt to make it contextual to the application.

Once the prompt is constructed, additional factual data may be needed by the LLM to respond with an accurate answer. Without this, LLMs may tend to hallucinate by responding with inaccurate and imprecise information. The factual data typically lives outside the realm of LLMs in external sources such as the world wide web, external databases, or an object storage bucket.

Two techniques are popularly used to bring external context into the prompt to assist the LLM in responding accurately. The first is to use a combination of an embeddings model and a vector database to retrieve information and selectively inject the context into the prompt. The second approach is to build a plugin that bridges the gap between the orchestration layer and the external source. ChatGPT uses the plugin model to retrieve data from external sources to augment the context.
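The first technique, embeddings plus a vector search, can be sketched in a few lines. The three-dimensional "embeddings" below are made up for illustration; a real system would call an embeddings model and query a vector database instead:

```python
import math

# Toy sketch of retrieval for RAG: embed documents and the query, retrieve
# the closest document by cosine similarity, and inject it into the prompt.
# The vectors are invented; real embeddings have hundreds of dimensions.
DOCS = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.1],
    "store hours":    [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec):
    # Nearest document by cosine similarity stands in for a vector-DB lookup.
    return max(DOCS, key=lambda name: cosine(DOCS[name], query_vec))

def augmented_prompt(question: str, query_vec) -> str:
    context = retrieve(query_vec)
    return f"Context: {context}\nQuestion: {question}\nAnswer using the context."

# A query whose (made-up) embedding sits close to "refund policy":
print(augmented_prompt("Can I return my order?", [0.8, 0.2, 0.1]))
```

Because the retrieved passage is injected into the prompt, the model answers from supplied facts rather than from whatever it memorized during training.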

Microsoft calls the above approaches Retrieval Augmented Generation (RAG). RAG is expected to bring stability and grounding to the LLM's responses by constructing a prompt with factual and contextual information.

Microsoft has adopted the same plugin architecture that ChatGPT uses to build rich context into the prompt.

Projects such as LangChain, Microsoft’s Semantic Kernel, and Guidance become the key components of the orchestration layer.

In summary, the orchestration layer adds the necessary guardrails to the final prompt that’s being sent to the LLMs.

The User Experience – The UX layer of the Copilot stack redefines the human-machine interface through a simplified conversational experience. Many complex user interface elements and nested menus will be replaced by a simple, unassuming widget sitting in the corner of the window. This becomes the most powerful frontend layer for accomplishing complex tasks, irrespective of what the application does. From consumer websites to enterprise applications, the UX layer will be transformed forever.

Back in the mid-2000s, when Google started to become the default homepage of browsers, the search bar became ubiquitous. Users started to look for a search bar and use that as an entry point to the application. It forced Microsoft to introduce a search bar within the Start Menu and the Taskbar.

With the growing popularity of tools like ChatGPT and Bard, users are now looking for a chat window to start interacting with an application. This is bringing a fundamental shift in the user experience. Instead of clicking through a series of UI elements or typing commands in the terminal window, users want to interact through a ubiquitous chat window. It doesn't come as a surprise that Microsoft is going to put a Copilot with a chat interface in Windows.

The Microsoft Copilot stack and its plugins present a significant opportunity for developers and ISVs. They will give rise to a new ecosystem firmly grounded in foundation models and large language models.

If LLMs and ChatGPT created the iPhone moment for AI, it is the plugins that become the new apps.

Source – https://www.forbes.com/sites/janakirammsv/2023/05/26/inside-microsoft-copilot-a-look-at-the-technology-stack/?ss=cloud&sh=7a92e15a5b59

The Sobering Truth About Ransomware—For The 80% Who Paid Up

Newly published research of 1,200 organizations impacted by ransomware reveals a sobering truth that awaits many of those who decide to pay the ransom. According to research from data resilience specialists Veeam, some 80% of the organizations surveyed decided to pay the demanded ransom in order to both end the ongoing cyber-attack and recover otherwise lost data. This despite 41% of those organizations having a “do not pay” policy in place. Which only goes to reinforce the cold hard fact that cybercrime isn’t an easy landscape to navigate, something that’s especially true when your business is facing the real-world impact of dealing with a ransomware attack.

The Sobering Truth For 21% Of Ransom Payers

Of the 960 organizations covered in the Veeam 2023 Ransomware Trends Report that paid a ransom, 201 (21%) were still unable to recover their lost data. Perhaps it's a coincidence, who knows, but the same number also reported that ransomware attacks were now excluded from their insurance policies. Of those organizations with cyber-insurance cover, 74% reported a rise in premiums.

Although I feel bad for those who paid up to no avail, I can’t say I’m surprised. Two years ago, I was reporting the same truth, albeit with larger numbers, when it came to trusting cybercriminals to deliver on their promises. Back then another ransomware report, this time from security vendor Sophos, revealed that 32% of those surveyed opted to pay the ransom but a shocking 92% failed to recover all their data and 29% were unable to recover more than half of the encrypted data.

The Decision To Pay A Ransom Is Never A Binary One

Of course, as already mentioned, the decision to pay is not and never can be a totally binary one. But, and I cannot emphasise this enough, it is always wrong.

You only have to ask the question of who benefits most from a ransom being paid to understand this. The answer is the cybercriminals, those ransomware actors who are behind the attacks in the first place. Sure, an organization may well argue that it benefits most as it gets the business back up and running in the shortest possible time. I get that, of course I do, but maybe investing those million bucks (sometimes substantially less, or more) in better data security would have been better to begin with?

But, they may well argue again, that's what the cyber-insurance is for, paying out the big bucks if the sticky stuff hits the fan. Sure, but the answer to my original question remains the same: it's the ransomware actors that are still winning here. They get the payout, which empowers them to continue hunting even more organizations.

Ransomware Has Evolved, But Security Basics Remain The Same

Then there's the not-so-small matter of how most ransomware actors no longer just encrypt your data, and often your data backups – if they bother encrypting anything at all. Some groups have switched to stealing sensitive customer or corporate data instead, with the ransom demanded in return for them not selling it to the highest bidder or publishing it online. Many groups combine the two for a double-whammy ransomware attack. I have even reported on one company that got hit by three successful ransomware attacks, by three different ransomware actors, within the space of just two weeks.

Which brings me back to my point: ensuring your data is properly secured is paramount. Why bother paying a ransom if you don't fix the holes that let the cybercriminals in to start with?

“Although security and prevention remain important, it’s critical that every organization focuses on how rapidly they can recover by making their organization more resilient,” Danny Allan, chief technology officer at Veeam, said. “We need to focus on effective ransomware preparedness by focusing on the basics, including strong security measures and testing both original data and backups, ensuring survivability of the backup solutions, and ensuring alignment across the backup and cyber teams for a unified stance.”

Source – https://www.forbes.com/sites/daveywinder/2023/05/30/the-sobering-truth-about-ransomware-for-the-80-percent-who-paid-up/?ss=cybersecurity&sh=191a618439f6

The Future Of Computing: Supercloud And Sky Computing

Cloud computing, multi-cloud, and hybrid-cloud are all terms we've become used to hearing. Now we can add "supercloud" and "sky computing" to the list of terminology that describes the computing infrastructure of the coming decade.

Although it's hard to believe, given how ubiquitous it is today, cloud computing as a practical reality has only been around for the past decade or so. In that short time, however, it has revolutionized the concept of IT networking and infrastructure.

In the simplest terms, it involves providing computer storage, processing power, and applications via the internet, so users don’t need to worry about buying, installing, and maintaining hardware and software themselves.

In that time, we’ve seen the emergence of multi-cloud – which involves businesses and organizations picking and choosing services across the multitude of cloud providers – and hybrid cloud, where infrastructure is delivered via both cloud and on-premises solutions.

But technological progress never stands still, and more recently, new terms, including supercloud and sky computing, have emerged to describe what the next stage in the evolution of "infrastructure-as-a-service" might look like.

But what do they mean, and what advantages do they offer businesses and organizations? Let’s take a look at them in a little more depth and examine some of the potential use cases.

What Are Supercloud and Sky Computing?

Both of these terms, in fact, describe very similar ideas – the next stage in the evolution of cloud computing, which will be distributed across multiple providers. It will also integrate other models, including edge computing, into a unified infrastructure and user experience. Other names that are sometimes used include “distributed cloud” and “metacloud”.

This is seen as necessary because, while many organizations have made the leap to multi-cloud, the different cloud providers do not always integrate with each other. In other words, a business pursuing a multi-cloud strategy may find itself managing multiple cloud environments, with each one operating, to some extent, as an independent entity. This can make it difficult if, for example, it wants to shift applications or data from one cloud to another.

The answer proposed by the supercloud concept is to create another abstraction layer above this that operates agnostically of whatever cloud platform or platforms are running below it. This is the supercloud, where applications can be run in containers or virtual machines, interfacing with any cloud platforms underneath.

The result is separate cloud environments that operate as if they are interconnected with each other, allowing software, applications, and data to move freely between them.
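In code terms, the supercloud abstraction layer amounts to a provider-agnostic interface with per-cloud adapters behind it. The class and method names below are invented for illustration; real adapters would wrap each provider's SDK:

```python
# Illustrative sketch of the supercloud idea: applications program against a
# provider-agnostic storage interface, while adapters hide each cloud's SDK.
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(CloudStorage):
    """Stand-in for an AWS/Azure/GCP adapter; a real one would call the SDK."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def migrate(src: CloudStorage, dst: CloudStorage, keys):
    # "Lift and shift": the application only ever sees CloudStorage,
    # so moving data between providers is provider-neutral code.
    for key in keys:
        dst.put(key, src.get(key))

aws_like, azure_like = InMemoryStorage(), InMemoryStorage()
aws_like.put("report.csv", b"q3,totals")
migrate(aws_like, azure_like, ["report.csv"])
print(azure_like.get("report.csv"))  # b'q3,totals'
```

Because both endpoints implement the same interface, the migration code stays identical whichever providers sit underneath, which is precisely the interconnection the supercloud concept promises.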

This means that a business might have service agreements in place with, for example, Amazon Web Services, Google Cloud, and Microsoft Azure. Infrastructure could then be reconfigured on-the-fly through the supercloud interface to move services between these different platforms, or between servers in different geographic locations, as requirements change.

Examples of when this might be useful are when services need to be delivered to a new group of users in a new region or when a particular data center becomes overloaded. The entire application can simply be “lifted and shifted” to a new, more convenient data center or a different cloud provider.

In many deployments, supercloud combines the benefits of both hybrid and multi-cloud, as it also gives access to on-premises infrastructure and other models such as edge computing. The important part is that all of it is accessible and usable through a unified user interface, so the actual location where the data is stored and where the applications are running from is invisible to the user, who always has a consistent experience.

As well as simplifying internal infrastructure, systems, and processes, migrating to supercloud models, in theory, makes it easier for organizations to integrate and share tools or data with their clients and partners, who may be using completely different platforms to them.

What Are The Key Challenges With Supercloud and Sky Computing?

Right now, a major challenge when it comes to setting up supercloud infrastructure is security. This is because different cloud providers might have different security protocols, and any data and applications that have to operate across multiple providers will need to be configured in a way that’s compatible with all of them.

Using more cloud services simply means that there are more surfaces where data can be exposed to possible security breaches. A priority for those laying the foundations for supercloud systems will be creating automated solutions that run in the supercloud layer in order to offer protection regardless of what cloud service or on-premises infrastructure is being used.

Fundamentally, cloud computing is designed to be a final stepping-stone on the road to the commoditization of computing infrastructure. This objective is set out in a paper published in 2021 by the University of California, Berkeley professors Ion Stoica and Scott Shenker, titled From Cloud Computing to Sky Computing.

Stoica and Shenker were early proponents of the cloud computing paradigm, writing about it as early as 2009. Back then, they predicted that it could lead to compute and storage infrastructure becoming "utilities," similar to electricity and internet connectivity. This didn't happen – largely due to the emergence of different standards between different cloud service providers (Amazon, Google, Microsoft, and so on). Supercloud (or sky computing, as Stoica and Shenker prefer to term it) may be the way to finally make it happen.

They do, however, posit that while the technical challenges will be fairly simple to overcome – creating services and standards to communicate between different clouds, for example – the concept might encounter some resistance from the cloud providers themselves.

Will Amazon or Google welcome the idea of “sharing” their cloud customers with competing services? Stoica and Shenker point to the existence of applications such as Google Anthos – an application management platform that runs on Google Cloud as well as AWS and other cloud platforms – as evidence that they might be becoming receptive to the idea.

Altogether, supercloud is an exciting concept that has the potential to make it simpler and more affordable for organizations to leverage powerful computing infrastructure. This has to be good news all around, hopefully making it easier for innovators to bring us cloud-based tools and apps that further enrich our lives.

Source: The Future Of Computing: Supercloud And Sky Computing

Cloud computing hub to launch with £2m EPSRC funding

A new £2 million hub, co-led by the University of York, has been launched to investigate the future potential of cloud computing.

The Hub, part of a £6m investment by the Engineering and Physical Sciences Research Council (EPSRC), will bring researchers together to drive innovations in cloud computing systems, linking experts with the wider academic, business and international communities. 

Future communication

The team behind the initiative – called Communications Hub for Empowering Distributed Cloud Computing Applications and Research (CHEDDAR) – believes it is imperative that new communications systems are built to be safe, secure, trustworthy, and sustainable, from the tiniest device to large cloud farms. 

Co-lead of the new hub, Dr Poonam Yadav, from the University’s Department of Computer Science, said: “The three communication hubs from EPSRC is a much-needed and timely initiative to bring cohesive and interoperable current and future communication technologies to enable emerging AI, neuromorphic and quantum computing applications.

“CHEDDAR is strongly built on the EDI principle, providing early career researchers opportunities to engage with far-reaching ideas along with national and international academic and industry experts.”


Jane Nicholson, EPSRC’s Director for Research Base, said: “Digital communications infrastructure underpins the UK’s economy of today and tomorrow and these projects will help support the jobs and industry of the future.

“Everybody relies on secure and swift networking and EPSRC is committed to backing the research which will advance these technologies.”


Led by Imperial College London, and in collaboration with partners from the universities of Cranfield, Leeds, Durham and Glasgow, the goals of CHEDDAR are to:

Develop innovative collaboration methods to engage pockets of excellence around the UK and build a cohesive research ecosystem that nurtures early career researchers and new ideas.  

Inform the design of new communication surfaces that cater to emerging computing capabilities (such as neuromorphic, quantum, molecular), key infrastructures (such as energy grids and transport), and emerging end-user applications (such as autonomy) to answer problems that we cannot solve today. 

Create integrated design of hierarchical connected human-machine systems that promote secure learning and knowledge distribution, resilience, sustainable operations, trust between human and machine reasoning, and accessibility in terms of diversity and inclusion. 

Source: Cloud computing hub to launch with £2m EPSRC funding