This week:

3 – Meta found ‘covertly tracking’ Android users through Instagram and Facebook

2 – Don’t block AI, but adopt it with eyes wide open

1 – Just because I’m biased, doesn’t mean I’m wrong

3 – Meta found ‘covertly tracking’ Android users through Instagram and Facebook

“The scripts bypassed Android’s security measures and meant that Meta and Yandex could track what users were doing on web browsers, without the user consenting or even knowing”

Source: Sky News

 

What’s the story?

Meta and Yandex (a Russian search engine) have been caught using a covert method to track Android users’ web browsing activity without their consent, even when users were browsing in incognito mode or using a VPN. Using an unusual technique, Meta was able to link users’ web browsing sessions to their Facebook or Instagram identities, bypassing typical privacy protections such as incognito mode, cookie clearing, and Android’s app permission system.

When asked about this, Google said Meta had used Android’s capabilities “in unintended ways that blatantly violate our security and privacy principles”.

Meta’s top-class creative writers responded by saying Meta is now in discussions with Google to address “a potential miscommunication regarding the application of [Google’s] policies”.

 

So what?

It should come as a shock to no-one that Meta is always looking for new and innovative ways to track what we are doing online.

As last week’s story shows, this is not ‘just’ about ads. It’s about mass surveillance.

2 – Don’t block AI, but adopt it with eyes wide open

“[Resist] the urge to adopt AI tools simply because they’re popular. Risk should drive implementation – not the other way around.”

Source: Computer Weekly

 

What’s the story?

This ‘Think Tank’ piece in Computer Weekly advises organisations not to obstruct AI adoption but to manage its use with clear policies and education. It highlights a number of risks, including:

  1. Data disclosure: Confidential data being loaded into an AI platform, resulting in it becoming available to a wider audience than expected. 
  2. Data poisoning: Malicious upload of data into an AI tool so its future outputs are ‘poisoned’ (e.g. unreliable or wrong).
  3. Excessive trust: Not checking the accuracy of AI outputs.

Addressing the risks requires tailored training, defining rules on what can be used and how it can be used, and embedding AI rules and security measures into the organisation’s existing security & risk management frameworks.

 

So what? 

We must accept that AI is here. And whether we like it or not, we must also accept that our employees are using it.

If we don’t give them clear rules and guidelines on what they can and cannot do with AI, we shouldn’t be surprised when their use of AI results in a security incident.

1 – Just because I’m biased, doesn’t mean I’m wrong

“Instead of hiring professional experts or outsourcing to a Managed Security Service Provider (MSSP), 74% of SMBs either self-manage their cybersecurity or rely on friends or family members who lack the necessary expertise.”

Source: Cloud Security Alliance

 

What’s the story?

A recent report by the Cloud Security Alliance highlights the severe cyber security risks faced by small and medium-sized businesses (SMBs).

Key findings include:

  • 55% of SMBs could close following a cyberattack causing losses of $50,000 or less.
  • 80% of SMBs acknowledged that they have significant security gaps.
  • 74% self-manage cybersecurity or rely on friends or family members who lack the necessary expertise.
  • 23% of owners admit they don’t fully understand their cybersecurity risks and 26% acknowledge that the person managing their security lacks proper training.

The report emphasises the need for SMBs to adopt stronger cybersecurity measures, including partnering with Managed Security Service Providers (MSSPs), to reduce the likelihood and impact of a cyber attack.

 

So what?

I help small and medium-sized organisations (primarily financial services firms, IT / SaaS service providers, and charities) to assess and improve the strength of their security measures.

So, I am biased when I publicise surveys like this one.

But just because I’m biased, doesn’t mean I’m wrong.