Covid Testing in the Workplace

Covid testing in the workplace – can we do it?

On 29 March, lockdown restrictions eased and the “stay at home” message turned to “stay local”. Earlier in the year, the Government also extended its Covid workplace rapid testing programme to all businesses, regardless of employee numbers. With this context in mind we are looking ahead to the potential reduction in temporary work from home arrangements and thinking about how testing can be managed in compliance with the GDPR.

What do employers need to consider?

Given that the workplace rapid testing programme is free, it is likely that a lot of businesses will be thinking about whether or not to deploy it in their organisation. As a responsible employer there are probably a few things you will be thinking about from a data protection perspective:

  1. How do we find out whether or not an employee is displaying symptoms?
  2. Can we add Covid test results to a personnel record?
  3. If we decide to apply for rapid tests, can we ask employees to take one?
  4. Can we share the test results?

Where do we start?

If you are considering making tests available to your employees, you should be clear about what you are seeking to achieve by doing so and whether collecting special category data is necessary for that purpose. This means you should be thinking about:

  • Do we really need this information?
  • If we do not have this information, what would the consequences be?
  • Do we need this information for all employees or just some?
  • What is the minimum amount of data we need for the purpose?
  • Are there any alternatives to collecting the information (e.g. can employees continue to work from home instead)?
  • Would the working environment be safer if we had this information?

Conducting a Data Protection Impact Assessment will help here; it will both consolidate your thinking and demonstrate your accountability. The DPIA needs to be done before you start the processing and it should be kept up to date as/when/if things change.

It is necessary – what next?

As health data is special category data, you need both a legal basis for processing and a separate condition for processing.

  • Legal basis
    • Public authorities: the legal basis is probably going to be “public task”
    • All other organisations: it is pretty well established that consent is a difficult legal basis for employers to rely upon; this is because of the imbalance in the relationship – it could be said that an employee is unable to give consent freely. So, it is likely you will be considering “legitimate interests”.
  • Condition for processing
    • There are probably two Article 9 conditions you will be looking at:
      • Article 9(2)(b) – the processing is necessary for the purposes of carrying out your obligations as an employer, or
      • Article 9(2)(i) – the processing is necessary for reasons of public interest in the area of public health.
    • Both of these conditions also require you to apply a condition under Schedule 1 of the Data Protection Act 2018.

Now what?

Your DPIA has clarified the extent of the personal data you need to collect and confirmed how the activity can proceed, and you have determined your legal basis. You now need to think about how to communicate the activity to your staff. Here you should be addressing:

  • How do I communicate clearly and effectively what our purpose is here and why we think it is necessary?
  • Would it be appropriate to explain our decision making?
  • How long are we going to keep the information and how can staff get access to it in the future?
  • What if our staff have concerns; how are we going to assuage those and ensure they are empowered to exercise their privacy rights?

If you are open and honest with your employees in the privacy information you provide to them, not only will you be satisfying your transparency obligations under the GDPR, you may also find that they will be more likely to want to participate in the activity.

We have started testing; what can we do with the results?

Provided that you think it is necessary and proportionate to do so, you will probably store the results against everyone’s personnel file. But you need to be mindful that they may take tests again in the future (of their own volition or through additional workplace testing) which means being particularly aware of your obligation to keep their records up to date.

You have a responsibility under health and safety legislation to report cases relating to Covid-19 in the workplace. You will also need to inform your relevant local authority when two or more cases are confirmed, as this is deemed to be an outbreak. In all cases of data sharing, you should:

  • only share what is necessary and proportionate,
  • always be clear on your legal basis for sharing,
  • ensure everything is documented for future reference (e.g. in your record of processing activities).

Seek advice

Get in touch now if you need more detailed advice and support in this area. You can also find more information on the ICO’s hub (found here). In terms of employment law specifically, this is a complex area and we strongly advise that organisations take advice from ACAS and/or seek specialist legal advice.

PECR Draft Update

Hungry for more? Cookies, Marketing and the draft Privacy Regulation

Discussion around updating the current ePrivacy Directive – which the UK applies via the Privacy and Electronic Communications Regulations (PECR) – started in early 2017.

It was meant to come into force at the same time as the GDPR in May 2018. And like the GDPR, it will be a Regulation – so it will apply directly in EU Member States, rather than requiring the adoption of (potentially conflicting) national laws.

After many drafts, we have the latest, which came from the Portuguese presidency in early 2021. Here’s what we learnt.

 Before we start…Brexit

Any change is unlikely to enter into force before 2023. And this only happens if Member States agree on a final version in the next few months.

And will the UK adopt it? The Government is already making noises that, while we will maintain the GDPR’s high standards, “we do not need to copy and paste the EU’s rule book…”

The Brexit Agreement commits both sides to upholding “high standards of data protection” and the UK is keen to secure the Adequacy Decision.

So, the answer is…who knows, at this time. But one thing to consider. The draft Regulation has the same “territorial scope” provisions as the GDPR: it will apply to people “who are in the Union.” So, if you place cookies on the devices of data subjects in the EU and/or market to them, it would apply anyway.

 Overall idea

The draft seeks to align the rules around cookies and marketing to the GDPR. The concept of accountability, so central to the GDPR, is therefore pushed to the forefront.

You need to know, and be able to demonstrate you know, what you are doing when you decide to place cookies on people’s devices…and also whether you should let other companies (like Facebook) place cookies too.

 Direct Marketing

The draft proposes little change to the current position. The following remain as now:

  • You need prior consent to send Direct Marketing via email, or SMS, or make automated calls.
  • Live calls to numbers not on the TPS/CTPS can still be made on the basis of legitimate interests.
  • The soft opt-in stays – the only change being that the UK could define a cut-off point after a sale, when you would have to stop sending messages.
  • You can still send B2B messages (i.e. to people in their working, professional life) on the basis of legitimate interests.

The only major change is that the definition of what’s covered by “electronic message” is expanded to include “functionally equivalent applications and techniques”.

  • This means that messages sent via WhatsApp, Messenger and other “over the top” services will now be covered, the same as email / SMS.

 Cookies

The draft provides a helpful update in how to approach cookies. It follows the GDPR method of first stating what you cannot do, and then outlining when it is possible.

So, the draft says you are prohibited from placing a cookie on a person’s device unless you can find a condition to justify doing so.

There are six conditions for when the placing of a cookie is necessary. This list consolidates the current times when cookies don’t need consent – i.e. the “communication” and “strictly necessary” exemptions in the PECR (and its amendments):

  1. For carrying out transmission of an electronic communication
  2. For providing a service specifically requested by the end-user
  3. For audience measurements, subject to certain conditions
  4. To maintain or restore security, prevent fraud or detect technical faults
  5. For a software update, subject to conditions, or
  6. To locate their device when the person makes an emergency call.

Of most interest are #2 – e.g. the contract someone signs might require that a cookie be placed to provide a service they have requested or bought from you – and #3 – you can use cookies to collect audience data (e.g. on numbers of visitors to certain pages of your website).

But this doesn’t cover when you want to go that step further and determine who is using the site (i.e. when you want to single out users and track them).

This, like other non-necessary cookies, will, as now, require consent.

Finally, there is a new condition, mirroring the GDPR concept of purpose compatibility: you can place a cookie if it is necessary for a purpose that is sufficiently compatible with the original purpose for which you placed the cookie in the first place.

 Consent using browser / software settings

The draft recognises that we are often requested to provide consent to cookies, due to the ubiquitous use of tracking cookies and similar tracking technologies.

This often means we’re overloaded with requests and stop reading the cookie banners…meaning the protection offered by consent is undermined.

To address this, the draft proposes the concept of providing consent via technical settings on your browser or other software applications: you can grant, through software settings, consent to a specific provider for the use of cookies for one or multiple purposes across one or more services of that provider.

So, watch this space to see if software companies take to this idea, and start developing their browsers.

Can you make access to your website dependent on the person giving you consent for non-necessary cookies?

Yes. And no. It depends on whether you are giving the person a genuine choice:

  • Yes: If you provide clear, user-friendly information about the purposes of cookies and there is an equivalent offer that does not involve consenting to non-necessary cookies.
  • No: If there is a clear imbalance between them and you, such as there being few or no alternatives to the service (i.e. no real choice), or you’re a provider in a dominant position.

The elephant in the room – the personal data generated by the Cookies you place

PECR, and the draft Regulation, define the rules for when you can place a cookie on a device. They do not contain any specific rule for doing anything with the data subsequently generated by the cookie.

Cookies generate data. Some of it is personal data. A recital in the draft quietly notes this: “The information collected from [someone’s device] can often contain personal data.” (recital 20).

The ICO’s Guidance on the use of cookies and similar technologies from July 2019 has a section called “Do the rules apply to the processing of personal data gained via cookies?”

Yet thus far, the focus has been on cookie banners and consent for placing the cookie, with little (if any) consideration of the lawful basis for processing the data generated by the cookie.

The ICO’s Guidance notes in an example “Tracking and profiling for direct marketing and advertising” that

“consent would be required for processing like tracking and profiling for purposes of direct marketing, behavioural advertisement, data-brokering, location-based advertising or tracking-based digital market research due to the nature of the processing operations and the risks posed to individuals.

“…in most circumstances, legitimate interests is not considered to be an appropriate lawful basis for the processing of personal data in connection with profiling and targeted advertising.”

Yet the vast majority of organisations currently just about obtain consent for the cookie…and don’t get the second consent for the processing of personal data generated by cookies for these purposes. They instead rely on legitimate interests (without quite realising they are doing so).

So, just like the shift from the DPA 1998 to the GDPR, the shift from the current PECR to the new rules will very likely carry the same underlying message: we’ve known the rules for years; we know what we should be doing…now we really must do what the law requires and be accountable for it.

Brexit Update

The end is in sight – and the UK might be adequate!

On 31st December 2020, the Brexit transition period ended. From 1st January 2021, the UK became a third country for the purposes of the EU GDPR.

The Brexit Agreement essentially gave the UK and EU six months to try and reach a long-lasting agreement. It said that:

  1. The UK would not be regarded as a third country until 1st May 2021: personal data could continue to flow between the EU and UK, and
  2. During this period, the EU would consider granting the UK an adequacy decision.

This would essentially mean the flows of data between the EU and UK could continue as now; the EU would regard the UK as a country with “adequate” data protection laws.

Six months might not seem a long time to assess the UK and reach a decision. But given that EU law has shaped the UK’s data protection laws for decades (rather than the normal process, whereby the EU uses the adequacy assessment to try and bring a country’s law and data protection regime into line with the GDPR) it seemed possible.

An announcement within 2 months

And so it was! The EU announced on 19th February that a draft adequacy decision had been presented to the European Data Protection Board (EDPB) for consideration.

It is widely hoped that the EDPB will give the green light; EDPB approval will mean that the adequacy decision can then be presented to EU Member States. If there is approval at this final stage, the UK will have its adequacy decision.

This would remove the need for EU organisations transferring data to the UK to scramble and search for alternative safeguards, like Standard Contractual Clauses. They can simply continue as before.

What about flows the other way (from the UK to EU)?

Personal data can continue to flow from the UK to the EU and to countries covered by UK “adequacy regulations” (the UK’s version of “adequacy decisions”, which are currently one and the same thing). At present, the UK has simply adopted all the EU adequacy decisions in place as of 31st December (i.e. the UK mirrors the EU regime).

What about the future?

It remains to be seen what happens if the UK starts to break away from EU precedent. The upcoming update to the ePrivacy rules may soon provide a test: the Brexit Agreement commits both the UK and EU to upholding “high standards of data protection” but in theory, the UK will not have to adopt the ePrivacy rules. Will the UK want to maintain parity with its European counterparts or come up with its own “dynamic” update of ePrivacy rules?

Both sides have confirmed that they will keep adequacy decisions/regulations “under review”. But for the immediate future, we may be close to a period of stability. We’ll keep you updated!

Seek advice

Protecture has been advising a number of clients on their obligations under the EU and UK GDPR post-Brexit, both in terms of their obligations in respect of personal data transfers and their duty to appoint an EU representative. Get in touch now for advice and support on these complex issues.

The Serious Side of the 6.2cm-tall man

Who is the 6.2cm tall man?

The story of Liam Thorp, a 32-year-old man with no underlying health conditions being offered a Covid vaccine early because his GP surgery thought he had a body mass index (BMI) of 28,000 made national headlines.

The data protection implications of this case have been unreported but are important to consider.

The inaccurate personal data could have led to someone receiving a jab before someone in greater need; there could have been practical health implications for the person whose access to a jab was delayed because someone else received theirs in error.

For all organisations, the collection, use, and maintenance of accurate personal data can be critical to operating efficiently: whether you are providing services to people, recording their preferences, Gift Aid status, or simply their contact details.

 

Breach of GDPR – part 1

Mr Thorp’s GP surgery had incorrectly recorded his height as 6.2cm rather than 6ft 2ins (188cm). Combined with his weight, this had given Thorp a BMI of 28,000 – roughly 1,000 times higher than the UK average – which would have made him morbidly obese.

Mr Thorp’s personal data was inaccurate.

The GDPR states that personal data shall be “accurate and, where necessary, kept up to date…”

The GDPR also states that “…every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay.”

The surgery appears to be in clear breach of the accuracy principle.

In such cases, it’s easy to put it down to “human error.” Someone simply recorded the height inaccurately.

But this is not sufficient. Data protection law requires us to consider: what measures was the surgery, as the Data Controller, taking to ensure its handling of personal data was adhering to the accuracy principle? What reasonable steps was it taking to identify inaccurate data and rectify it without delay? Does it need to review its processes, procedures, and operational systems to prevent this sort of mistake from happening again?

 

How to improve data accuracy

What technical measures could their systems have had in place to help ensure staff recorded accurate data?

  • e.g. a “minimum and maximum limit” on the height field – to alert the staff if a figure was recorded that was outside agreed parameters, and ask them to confirm if the “outlier” figure was accurate or to amend it (a minimal sketch of this kind of check follows below).
  • e.g. a “minimum and maximum limit” on the BMI field – to alert staff to BMIs outside of agreed parameters (this also poses the question of whether the BMI figure was created for the purpose of allocating Covid jabs, or was already on record. If it was already on record, why had it not been recognised and acted upon before?).
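To illustrate the first point, here is a minimal sketch of how a record-keeping system might validate height and the derived BMI before saving. The field names, ranges and the weight used in the example are illustrative assumptions, not details of the surgery’s actual system.

```python
# Illustrative sketch only: the limits and weight shown are assumptions,
# not taken from the surgery's actual system.
HEIGHT_CM_RANGE = (50, 250)   # plausible adult height range in centimetres
BMI_RANGE = (10, 100)         # values outside this range are almost certainly errors


def bmi(height_cm: float, weight_kg: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / (height_m ** 2)


def validate_measurements(height_cm: float, weight_kg: float) -> list[str]:
    """Return warnings for the member of staff to confirm or correct before saving."""
    warnings = []
    if not (HEIGHT_CM_RANGE[0] <= height_cm <= HEIGHT_CM_RANGE[1]):
        warnings.append(f"Height {height_cm} cm is outside the expected range {HEIGHT_CM_RANGE}.")
    value = bmi(height_cm, weight_kg)
    if not (BMI_RANGE[0] <= value <= BMI_RANGE[1]):
        warnings.append(f"Calculated BMI {value:.0f} is outside the expected range {BMI_RANGE}.")
    return warnings


# A height recorded as 6.2 cm (instead of 188 cm) triggers both warnings,
# with a BMI in the region of 28,000 for an assumed weight of 110 kg.
print(validate_measurements(height_cm=6.2, weight_kg=110))
```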

What organisational measures could have been in place to help ensure staff record accurate data?

  • e.g. Staff training on the importance of recording and using accurate personal data. This can be especially important when it comes to giving staff the confidence to accurately record “free text” narrative notes and information about people.
  • e.g. Annual audits of data accuracy. Such an audit might have flagged the anomaly of a 6.2cm height in the system.

Breach of GDPR – part 2

There is also the possibility that the surgery breached the “Data Protection by Design” requirements of the GDPR.

These state that organisations should “…implement appropriate technical and organisational measures…which are designed to implement data protection principles… in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of [the GDPR and] protect the rights of [people].”

Essentially: If the surgery had brought in their system after May 2018, the technical measures should have been considered, and could have formed part of the specification.

Key takeaways

  1. Conduct an audit of data accuracy.
  2. Review and/or deliver staff training on data accuracy.
  3. Ensure the specification for your next IT system considers data protection by design – and includes technical measures to help ensure accurate personal data is recorded.

The BA Fine – How British Airways security flaws let data theft unfold

How will the BA Fine affect IT security? After over a year of delay, the Information Commissioner’s Office (ICO) finally issued their much-anticipated Penalty Notice against British Airways on 16th October 2020.

There have been headlines and debate around the size of the penalty. But the real focus should be on what the Notice tells us about the ICO’s approach to assessing whether an organisation is following the GDPR with regards to information security.

“When organisations take poor decisions around people’s personal data, that can have a real impact on people’s lives. The law now gives us the tools to encourage businesses to make better decisions about data, including investing in up-to-date security.” ICO press release.

The fine is a significant opportunity to engage with the business afresh

The BA case provides the first detailed view of how the ICO will approach looking at IT security and the GDPR.

At its core, the case highlights that IT departments will likely struggle to demonstrate how they are ensuring the organisation is complying with the GDPR’s security requirements without input from the business and data protection expertise.

This is because the GDPR’s approach to security requires a number of different considerations, only some of which are within the typical IT department’s skill set.

IT should be able to outline:

  • The threats and risks faced by the systems and processes it oversees.
  • The technical options available to mitigate those threats and risks, highlighting whether they are industry standards.
  • The costs of implementing those options.

Yet it is important to recognise that it is for colleagues in other departments to:

  • Assess the value and sensitivity of data and its importance to people if it were mishandled (i.e. to assess the risk to the rights of data subjects).
  • Consider the nature, scope, context and purposes of processing and how this should be reflected in the resources the organisation is willing to commit to this area of risk.
  • Consider the operational, commercial, regulatory and ethical risks the organisation faces when handling data.
  • Consider all these factors, alongside the input from IT, in order to agree what it considers to be the appropriate technical and organisational measures that will deliver an appropriate level of security.

The organisation can then instruct IT to implement these measures. IT is then responsible for deploying the measures and maintaining the operational functionality of systems.

After the BA case, IT can never again be just about IT.

 

 

A strong relationship between IT and Data Protection experts is key

The BA case highlights the importance of IT and data protection experts working together in order to ensure their different but related skills and knowledge are aligned for the benefit of the organisation.

Working together can ensure consistent assessments of risk so that the available options, their benefits, costs and risks can be presented to decision-makers and a risk-based decision made.

As well as being important for the selection of appropriate security measures, a strong working relationship is also an important factor in ensuring IT projects do not result in systems that fail to comply.

If Data Protection and IT experts work together from the outset of new projects – adding their expertise, alongside input from the business, to Data Protection Impact Assessments; considering how the system can be built on the principle of Data Protection by Design and Default; and articulating the impacts (in cost, time, functionality) and benefits of potential solutions – the project can deliver business outcomes and ensure GDPR obligations are met.

Systems need to be properly managed and resourced 

The BA case highlighted the importance of managing all aspects of your systems and in particular the need to support this with appropriate paperwork that matches reality.

At various points, BA tried to defend its approach to different aspects of information security. It was often unable to provide evidence of its decision-making.

For example, BA tried to say that it was following guidance when not requiring Multi-Factor Authentication (MFA) for certain remote network access. But when pressed for evidence of the decision making behind this approach, the ICO flagged that:

“Given…that no copy [of the risk assessment] can now be located, it is not possible to say that BA took into consideration the risk, the state of the art, the cost, or the available technical measures when deciding what security was appropriate.” (6.26, p34).

Similarly, when the ICO asked why other measures had not been considered or implemented, BA’s arguments failed to convince:

“BA argued that it was untenable to suggest that whitelisting was an alternative in practice…However: (i) there is no evidence that BA considered what alternative measures could be put in place as an alternative to MFA…and (ii) even if BA is correct that this solution would not have proven viable, it does not obviate the need to consider appropriate measures or explain why other appropriate measures were not in place” (6.27b, p35)

You, therefore, need the paperwork as evidence and to support your decision-making, especially if the decision is to accept a degree of risk.

In the BA case, decisions made by IT, whether deliberately or not, accepted a lot of risk on behalf of the business. This was often for the sake of pragmatism.

For example, BA tried to argue that the storage of credentials in plain text within scripts saved time, was standard practice and/or an acceptable way of ‘aiding functionality.’  The ICO notes that:

“If this is why BA stored passwords in this way, that is not an acceptable reason…when considering the minimal time saving it allows, the high risk it poses, and the alternative methods…available to BA as an organisation.” (6.74, p49).

Maintaining a risk log can help as it provides a place to document risks, their possible impact, and the solutions available. The decision whether to select a particular solution or accept a degree of risk can be presented to decision-makers, escalated where necessary and kept under review.
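As a minimal illustration of what such a log entry might capture (our sketch, not a prescribed or official format), the fields below are assumptions about what a useful record could contain:

```python
# Illustrative sketch of a risk log entry; the fields are assumptions, not a
# prescribed format.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskLogEntry:
    risk: str                      # what could go wrong
    impact: str                    # effect on data subjects and the organisation
    likelihood: str                # e.g. low / medium / high
    options_considered: list[str]  # possible mitigations, with rough costs
    decision: str                  # mitigate, or accept the risk (and why)
    owner: str                     # who signed the decision off
    review_date: date = field(default_factory=date.today)


entry = RiskLogEntry(
    risk="Remote access without multi-factor authentication",
    impact="Compromised credentials give an attacker direct network access",
    likelihood="medium",
    options_considered=["Enforce MFA for all remote access", "IP allow-listing"],
    decision="Enforce MFA; escalated to and approved by senior management",
    owner="Head of IT",
)
print(entry)
```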

The ICO also noted how often BA’s policies and statements did not reflect the reality of what they found happening in practice.

For example, at a policy level BA said Multi-Factor Authentication (MFA) would be used for all remote network access. When challenged to explain why 13 of 243 applications were not actually protected by MFA, the ICO concluded that:

“BA has not provided a satisfactory explanation as to why…it was deemed unnecessary for certain applications…to comply with the policy requiring MFA.” (6.21, p32-33).

Paperwork must therefore reflect reality. The GDPR’s accountability principle requires organisations to maintain evidence of their compliance and the effectiveness of their measures. Records of decisions around IT security should therefore be a reliable, true and honest reflection of your approach, and of any decisions to change, amend or deviate from agreed policies and procedures.

And finally, it is worth noting that the ICO highlighted to BA that:

“None of the…measures [the ICO had highlighted] would have entailed excessive cost or technical barriers. They are all readily available measures available through the Microsoft Operating System used by BA” (6.72, p48)

 

Senior management will be more likely to listen

 

For too long, IT Departments have been asked to achieve more with less; issues have been seen as something solely for “IT to sort.” And the business has left many decisions about risk and what is appropriate to IT when, as outlined above, they should not have been.

And for too long, IT Departments have often struggled to bridge the gap between heavy, technical detail and explaining why it needs resource to support the business in achieving its objectives. They have struggled to find language that the business, particularly senior decision-makers, can understand, recognise, and act upon by providing sufficient resource or accepting risks on behalf of the business.

The BA case demonstrates the importance of finding a way to translate the IT threats and risks an organisation faces into the practical, possible impacts that they will have on the people whose personal data is being processed. This should in turn enable the business to understand the risks alongside the costs of different technical measures available to them and place these against the other risks it faces.

The ICO noted that BA:

“has indicated that expenditure on IT security will not be reduced as a result of the impact of Covid-19.” (7.46, p73)

It is important to recognise that BA is now, after the event and with the ICO watching, having to use unplanned time and resource to address security issues as well as make a public commitment to future spend in this area.

Organisations should not get into this position. The ICO is frequently at pains to highlight that the law requires appropriate measures to be in place depending on the circumstances of each organisation.

“The Commissioner does not find that simply because an attack took place BA was in breach of its obligations under the GDPR. Instead, the Attack which did occur exposed the fact that BA had failed to secure its systems in an appropriate manner.” (6.111, p59)

The ICO is interested in “whether a particular data controller has taken appropriate steps by reference to the data it is processing.” (6.106, p58)

 

Learn from what happened and recognise the areas that relate to your current approach to security

Context

The ICO’s report lists considerable failings on BA’s part. The timeline below represents a portion of the high-level events and things to learn.

Timeline

22/6/18

An attacker obtained five sets of credentials for the company’s Citrix portal. These belonged to a supplier, Swissport, which operated at an airport in Trinidad and Tobago. No mention was made of how the credentials were obtained, although four methods are possible: (1) purchasing them; (2) a camera recording logins; (3) social engineering; (4) use of a keylogger.

User training would have helped prevent social engineering, and endpoint protection (although unlikely to be an appropriate solution in this case, given that it was third-party hardware) could have been configured in such a way as to prevent keyloggers. None of the four methods would have been successful if multi-factor authentication had been in place.

On gaining access to the portal, the attacker discovered that it had not been hardened as per Citrix’s own published standards. The attacker was therefore able to break out by running a program that was either not intended for users or had been uploaded. The details of this are not published; however, it is implied that this program was possibly IT admin related, and that the users had more access than they required.

On breaking out of the portal, a plain text file was found on the network containing system admin passwords. There is a suggestion that this was code used by applications.

  • Use of application blocklists and allow lists would have been appropriate in Citrix. Alerting should have been configured to report attempted access to unauthorised applications.
  • A process for system implementation and risk assessment should have been in place to ensure all systems were configured to agreed standards.
  • Additionally, the least privilege principle should have been used on the Microsoft user accounts.
  • Penetration testing should have been undertaken.
  • A password management or Privileged Access Management (PAM) tool should have been in place (a minimal credential-handling sketch follows below).
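On that last point, here is a minimal sketch (an assumption about good practice, not a description of BA’s systems) of fetching privileged credentials at run time rather than keeping them in a plain text file or hard-coded in a script. The variable name is hypothetical; in practice a dedicated secrets manager or PAM tool would sit behind this lookup.

```python
# Illustrative sketch: read a secret from the environment instead of a plain
# text file; refuse to fall back to a hard-coded default.
import os


def get_secret(name: str) -> str:
    """Return the named secret from the environment, or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Secret {name!r} is not set; aborting rather than using a default")
    return value


# Hypothetical variable name for illustration; raises unless it has been provisioned.
db_admin_password = get_secret("DB_ADMIN_PASSWORD")
```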

23/6/18

The system admin credentials from the plain text file were used; they were outdated and the logon attempts were unsuccessful.

A system monitoring alert could have been set up for failed system admin (SA) logons. It is implied that a suitable system was in place; however, it was not configured appropriately. The attack could have been stopped here.
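As a rough sketch of that kind of alerting (the CSV log format and column names here are assumptions, not BA’s setup), failed admin logons can be counted per account and flagged above a threshold:

```python
# Illustrative sketch: scan an exported authentication log (assumed CSV with
# 'account' and 'outcome' columns) and alert when failed logons exceed a threshold.
import csv
from collections import Counter

FAILURE_THRESHOLD = 3


def failed_logons(log_path: str) -> Counter:
    failures = Counter()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["outcome"].lower() == "failure":
                failures[row["account"]] += 1
    return failures


for account, count in failed_logons("auth_log.csv").items():
    if count >= FAILURE_THRESHOLD:
        print(f"ALERT: {count} failed logons for {account} - investigate")
```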

25/6/18

The attacker successfully logged into three servers using local accounts. No details of how are published, implying a security exploit and a possible patch management issue. By adding the guest account to the local administrator group, they were able to gain local admin capability.

Patch management and system monitoring for Event 4732 (inclusion of an account into the local admin group) would have interrupted this attack vector.
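A minimal sketch of that monitoring idea, assuming the security events have been exported to a CSV with “EventID” and “Message” columns (our assumption, not BA’s configuration):

```python
# Illustrative sketch: flag Event ID 4732 (an account was added to a
# security-enabled local group) in an exported Windows Security log.
import csv

WATCHED_EVENT_ID = "4732"


def alert_on_group_changes(log_path: str) -> None:
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["EventID"] == WATCHED_EVENT_ID:
                print(f"ALERT: local admin group change detected: {row['Message']}")


alert_on_group_changes("security_events.csv")
```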

26/6/18

The attacker located DBA credentials.

26/6/18

The attacker located log files containing payment card details in plain text. BA stated that this logging was there for testing purposes but was not removed when the system was promoted to live. This led to a breach of 108,000 records.

Intelligent network security applications or hardware would have blocked and identified the movement of these records.

The attacker was able to scan BA’s systems thoroughly without triggering any monitoring, and alter some JavaScript code in the Modernizr application – last updated by BA in 2012 – to redirect payment card details to a server controlled by the attacker. It is implied the code was changed in a Development environment and the attackers waited for it to be promoted to Live as part of the normal release cycle.

Manual reviews of code handling such high volumes of transactions should have been undertaken, and there was no alerting in place to report on code changes.
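One simple form of such alerting is a file-integrity check: record hashes of the JavaScript served to customers and alert when they change unexpectedly. A minimal sketch follows; the directory and baseline file names are illustrative assumptions.

```python
# Illustrative sketch: compare SHA-256 hashes of deployed JavaScript files
# against a known-good baseline and report any unexpected changes.
import hashlib
import json
from pathlib import Path


def hash_files(directory: str) -> dict[str, str]:
    """Map each .js file under the directory to its SHA-256 digest."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in Path(directory).rglob("*.js")
    }


def check_against_baseline(directory: str, baseline_path: str) -> None:
    baseline = json.loads(Path(baseline_path).read_text())
    for path, digest in hash_files(directory).items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} has changed or is not in the baseline - review before release")


check_against_baseline("static/js", "js_baseline.json")
```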

14/8/18 – 5/9/18

The above actions led to all payments (including card and CVV data) on BA’s website being copied and redirected to ‘BAways.com’, a domain controlled by the attacker.

5/9/18

A 3rd party identified payment data being copied to an external domain and notified BA.

Conclusion

The security of the infrastructure was either not considered, or not thought a sufficient threat to address, even though third parties had BA domain accounts. There may be several reasons for this. Certainly, it suggests a lack of staff knowledge of the security configuration of Citrix. It also implies that IT management did not make this a priority for admin staff, and it strongly implies that Senior Management was carrying risks it was either unaware of or uninterested in.

As BA had the resources to address these issues, the fundamental failure here was likely to be one of communication. No IT team would intentionally introduce the security issues mentioned above. Senior Management should have provided appropriate resources and guidance as well as insisted on appropriate risk management reporting.

Where Protecture can help

Protecture does not view Data Protection (DP), Information Security (IS), IT and Leadership-led risk management as separate entities. It is not possible for any of those individual specialities to be successfully implemented without the other three. We understand the overlap, that each needs to communicate with the others, and that each speaks a different language. We can help you break down internal barriers, unlock budget by highlighting the risks, and implement the solutions you require.

Whilst our individual services focus on IS, DP, IT and Risk, they are all delivered by people who understand how these areas interact. Through partnerships with best-in-class manufacturers, software creators and specialists, we are able to address all of the issues listed above in a way that suits your business requirements.

We can help you Manage Data Fearlessly.

The BA Fine – The boardroom’s responsibilities: what are your business risks?

How will the BA fine affect your business? After over a year of delay, the Information Commissioner’s Office (ICO) finally issued their much-anticipated Penalty Notice against British Airways on 16th October 2020.

There have been headlines and debate around the size of the penalty. But the real focus should be on what the Notice tells us about the ICO’s approach to assessing whether an organisation is following the GDPR with regards to information security.

“When organisations take poor decisions around people’s personal data, that can have a real impact on people’s lives. The law now gives us the tools to encourage businesses to make better decisions about data, including investing in up-to-date security.” ICO press release.

Can your organisation demonstrate how personal data is managed?

 

When a breach happens, the ICO will want to see the evidence:

  1. That you have invested an appropriate amount of time and resource in information security.
  2. What that investment was on, i.e. what security measures you had in place, both the technical measures and your policies and procedures.
  3. Whether the measures were working – i.e. whether the reality of what was happening at the time of the breach matched what you said you would be doing to protect the data.
  4. What you considered when deciding to use the particular security measures – i.e. the decision-making process behind why you considered them appropriate for your organisation.

Organisations have been accountable for data protection since 1984. Yet the introduction of the GDPR was the first time there was an outright “accountability” principle. It made clear that every organisation

“shall be responsible for, and be able to demonstrate compliance with [the data protection principles]” Article 5(2)

Last year the ICO flagged the importance of accountability:

“…the crucial, crucial change the law brought was around accountability.

Accountability encapsulates everything the GDPR is about.

It enshrines in law an onus…to understand the risks that they create for others with their data processing, and to mitigate those risks.

I don’t see that change in practice yet. I don’t see it in the breaches reported to the ICO….in the cases we investigate…”

Data Protection Practitioners’ Conference, April 2019 [1]

Accountability was at the heart of the BA case. From the information available, it would appear that BA found it difficult to provide evidence of their decision-making at several points during the investigation.

When asked to provide up-to-date risk assessments, they couldn’t locate them.

When trying to defend why certain measures were not in place, they were not able to simply point to the records outlining how they reached their decisions. This meant they couldn’t demonstrate they had considered alternatives or why they had discounted particular measures.

And when trying to explain themselves after the event, the ICO found their arguments unconvincing.

It could have been different. Imagine if BA had been able to say “Here are the logs and monitoring data that show our security was working as planned. And here’s our decision-making process for each aspect of our information security. Each decision is based on the nature, scope, context and purposes of processing and the risk to the rights of people, the cost, and the current industry standards…and we, therefore, believe they are appropriate.”

 

Does risk assessment underpin how you make IT decisions?

 

The ICO noted that BA:

“implemented a number of remedial technical measures so as to reduce the risk of a similar Attack in future, and has indicated that expenditure on IT security will not be reduced as a result of the impact of Covid-19.” (7.46, p73)

It is important to recognise that BA is now, after the event and with the ICO watching, having to use unplanned time and resource to address security issues as well as make a public commitment to future spend in this area.

Organisations should not get into this position. The ICO is frequently at pains to highlight that the law requires appropriate measures to be in place depending on the circumstances of each organisation:

“The Commissioner does not find that simply because an attack took place BA was in breach of its obligations under the GDPR. Instead, the Attack which did occur exposed the fact that BA had failed to secure its systems in an appropriate manner.” (6.111, p59)

The ICO is interested in “whether a particular data controller has taken appropriate steps by reference to the data it is processing.” (6.106, p58)

It is not the ICO’s role to “investigate and establish the extent of any damage that may have been caused to any particular data subject.” (7.45, p73)

Here, both the challenges and the advantages of data protection’s principles-based approach can be seen.

It is helpful that the law recognises one size fits none; that organisations need the flexibility to select security measures that fit and work for them given their size, budget, the sensitivity and volume of data they handle and their appetite for risk.

Yet this flexibility comes with responsibility. You need to have a documented, consistent, robust method for assessing what is appropriate and deciding which measures to adopt.

This is where your data protection and IT experts need to work together in harmony in order to assess risk and present options to you. They need to consider the nature, scope, context and purposes of processing and the risk to the rights of data subjects. They need to outline what the current industry standard solutions are, given the current threats and risks, and the costs of implementing the different options.

If this happens, you should be able to plan your IT spending and approach, based on an assessment of risk. IT should implement the agreed changes to the agreed plan, alongside ensuring the organisation is using the existing measures on a daily basis to manage data.

It is also worth considering, on top of the fine and the remedial IT measures, the other costs BA would also have incurred as a result of the breach:

  • External forensic consultants and legal advisers (BA tried to argue that it was appropriate to reduce the penalty by reference to these costs. The ICO did not) (7.51d, p75)
  • Making free credit monitoring available, which over 40,000 data subjects took up (7.12d, p64)
  • BA offered to reimburse all customers who had suffered financial losses as a direct result of the theft of their card details (7.44, pp72-73).

Such costs are often overlooked or underplayed but should form part of any assessment of the cost/benefit analysis of any security options.

 

 

Do IT and Data Protection meet regularly, and do they both present regularly to Senior Management?

 

The BA case demonstrates a long-held area of risk for many organisations: whether IT and data protection experts work well together.

This is critical. IT should not be expected to make decisions on the value and significance of personal data. That is a decision for the organisation.

IT should be expected to provide their expertise about the threats and risks the organisation faces, the technical options available to mitigate those risks, and the costs of those options. It would also be IT’s responsibility to deploy the solutions and maintain the operational functionality of the systems.

Data Protection should not be expected to know the specific technical details of the IT threats the organisation faces, or how to address them; however, their input should always be sought when assessing the risks to data subjects and the organisation which may arise from the processing of personal data.

Both teams should be expected to work with the rest of the organisation to assess and define what value and significance it wishes to allocate to the data being processed. They should be able to work with the organisation to assess the operational, commercial, regulatory and ethical risks the organisation faces when handling data.

If they work well together, senior management should be presented with options about which security measure is considered appropriate and why in order to enable them to make risk-based decisions.

 

 

Did your GDPR project cover information security and did it ever finish?

 

The ICO noted that:

“…the advent of the GDPR should have prompted a careful review of BA’s systems and security arrangements” (7.23, p67)

BA highlighted its “extensive commitment to information security” and the ICO accepted BA had put in place a programme to prepare its systems for the introduction of the GDPR.

But the ICO concluded that BA could only demonstrate a commitment to certain aspects of information security because the programme had failed to identify and address the deficiencies in BA’s security that were highlighted by the attack.

The ICO concluded that BA was negligent in failing to ensure that it had taken all appropriate measures to secure personal data. (7.21 and 7.23, pp66-67).

“In view of these factors, the Commissioner would expect BA to  have taken appropriate steps or a combination of appropriate steps to secure the personal  data  of its  customers; and considers that BA was negligent…in failing to do so.” (7.20, p66).

 

 

Avoid sounding like you do not value people’s personal data

 

It is understandable that BA’s lawyers would try everything to defend their client, avoid a fine or otherwise reduce the level of the fine.

However, this can result in the organisation’s views on its customers and their personal data becoming public knowledge. This risks giving the public the impression that the organisation is downplaying, or is insufficiently concerned about, the impact of the breach on its customers.

In this case, the ICO states that it thought it likely that many of the 429,612 individuals affected by the breach will, depending on their circumstances,

“have suffered anxiety and distress as a result of the disclosure of their personal information (including payment card information) to an unknown individual or individuals.” (7.12, pp62-63)

BA countered this position, resulting in them putting on public record, via the Notice, their belief that

  • It is “inherently unlikely” that consumers will be distressed by learning their payment card data have been compromised.
  • Payment card breaches such as this one are “an entirely commonplace phenomenon” and therefore an “unavoidable fact of life”
  • The breach was not that serious because hundreds of thousands of customers were affected, rather than millions as in other breaches
  • The action taken to mitigate the impact of the attack would have immediately addressed all concerns on the part of its customers about their data being in the hands of criminals and/or otherwise outside of BA’s control (7.45, p73)

The ICO did not accept these points. The ICO also did not comment on:

“BA’s assertions that “claimant law firms will, for entirely self-serving purposes, use the word “distress” very liberally, essentially with the aim of garnering thousands of potential claimants on no-win-no-fee agreement…” The Commissioner applies that term in accordance with the legislation, when the circumstances under consideration warrant it.” (7.12c, p64)

The reputational impact of making such statements public will be difficult to quantify, but needs to be considered by all organisations.

[1] https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/04/data-protection-practitioners-conference-2019/

The BA Fine – The Data Protection Landscape is Changing

How will the BA Fine affect the data protection landscape? After over a year of delay, the Information Commissioner’s Office (ICO) finally issued their much-anticipated Penalty Notice against British Airways on 16th October 2020.

There have been headlines and debate around the size of the penalty. But the real focus should be on what the Notice tells us about the ICO’s approach to assessing whether an organisation is following the GDPR with regards to information security.

“When organisations take poor decisions around people’s personal data, that can have a real impact on people’s lives. The law now gives us the tools to encourage businesses to make better decisions about data, including investing in up-to-date security.” – ICO press release.

The importance of assessing risk

The ICO’s messaging to Data Controllers frequently highlights that the law requires ‘appropriate’ measures:

“Not every instance of unauthorised processing or breach of security will amount to a breach…The obligation under Article 5…is to ensure appropriate security; the obligation under Article 32 is to implement appropriate technical and organisational measures to ensure an appropriate level of security…” (6.5, p29)

The challenge the Controller faces is to have in place a process for assessing what is “appropriate”, especially when it comes to IT security, where there are many methods and means to protect data, each with different costs and impacts upon the organisation.

The ICO noted that Articles 5 and 32 outline what is required. First, they require you to look at the organisation and the data:

“the nature, scope, context and purposes of processing…” (6.5, p29)

In the BA breach case, the ICO considered the size and profile of the organisation and the nature of its business. The ICO concluded BA should have recognised that the delivery of its services required it to process large volumes of personal data and that it was likely to be targeted by attackers.

All organisations, therefore, need to consider how their operational model (locations; service delivery model), the sector they are in (e.g. healthcare; education; charity), and their own history (rapid expansion; changes in approach) bring heightened expectations and/or risks.

Next, the Articles ask us to bring people into our assessment. We need to consider

“…the risk to the rights of data subjects.” (6.5, p29)

The GDPR outlines, in recital 75, examples of potential risks to individuals. These include: being prevented from exercising control over their personal data, identity theft or fraud, financial loss, and damage to reputation, through to physical harm. The ICO will look at the degree of damage or harm (which may include distress and/or embarrassment) when identifying the circumstances in which they consider it will be appropriate to issue a Penalty Notice. (2.37, p16)

The ICO concluded that BA should have recognised the risk that a breach may have had significant consequences for its customers.

How did BA not recognise this? This points to our next lesson.

The importance of valuing all personal data

No special category personal data was affected by the breach.

But BA’s attempt to argue that the ICO had “severely overstated” the sensitivity of the data affected was knocked back; the ICO considered that:

“the loss of control by BA of personal data such as names, addresses and unencrypted payment card data to be particularly serious, allowing as they do the opportunity for identity theft.” (7.32, p69)

This highlights that assessing the value and sensitivity of data, and its importance to people if it were mishandled, is never as simple as having just three categories (e.g. not personal, personal data or special category personal data).

In this case, the ICO noted that where financial data, especially full financial data (i.e. the card details and the CVV), is disclosed, and where a high volume of data is disclosed, it gave that data a greater value and significance.

And we have seen the ICO take action regarding Easyjet in July, asking them to make their loss of flight details public because such details could be used to produce convincing phishing messages during a time when more people are likely to react to emails about refunds (due to the impact of Covid).

So there is a need for a more nuanced assessment with the individual’s rights and welfare at the heart of it. For example, consideration should be given to:

  1. The category of people whose data is being processed.
  2. The volume of records and volume of data per record.
  3. The data “stickiness” (how difficult/inconvenient it is for someone to change the data if it is exposed/misused?)
  4. The data “criticality” (how much does the individual and the people they interact with rely on the data being accurate and available, and what is the degree of effect if it is not?)
  5. The data “reach” (how far outside of their direct control will the data travel for the processing?)

Adopting a “person-centred” approach to assessing the value and sensitivity of personal data will help you assess the potential risk to the rights of data subjects. Next we need to look at the final aspect for assessing what is appropriate: technology and cost.
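As a purely illustrative way of structuring such an assessment (the factors mirror the list above, but the 1-to-3 scoring scale is our assumption, not ICO guidance):

```python
# Illustrative sketch: capture the person-centred factors as simple scores
# (1 = low, 3 = high) and combine them into an overall sensitivity rating.
FACTORS = ("category_of_people", "volume", "stickiness", "criticality", "reach")


def sensitivity_score(scores: dict[str, int]) -> float:
    missing = set(FACTORS) - scores.keys()
    if missing:
        raise ValueError(f"Missing factor scores: {missing}")
    return sum(scores[f] for f in FACTORS) / len(FACTORS)


# Example: unencrypted payment card data for hundreds of thousands of customers.
print(sensitivity_score({
    "category_of_people": 2,  # general consumers
    "volume": 3,              # very high volume of records
    "stickiness": 3,          # card details are inconvenient to change
    "criticality": 3,         # fraud has an immediate effect on the person
    "reach": 3,               # data travels well beyond the person's control
}))
```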

 

A strong relationship between Data Protection and IT experts is key

The GDPR recognises that one size fits no one. Article 32 enables a cost-benefit analysis when deciding which technical and organisational measures are appropriate for your organisation. You can consider:

  • The “state of the art” – i.e. what the current industry standard solutions are, given the current threats and risks.
  • The “costs of implementation” – i.e. you can consider the costs of implementing a certain approach.

The BA case highlights the importance of data protection experts working with IT experts in order that their different but related skills and knowledge can work together here.

  • Data protection experts are responsible for prompting the organisation to define the value and significance of the different data being processed. As we saw in lesson one, this should be based on an assessment of the nature, scope, context and purposes of the processing and risks to individuals. It should also consider the operational, commercial, regulatory and ethical risks the organisation faces when handling data.
  • IT should provide expertise about the threats and risks the organisation faces, the technical options available to mitigate those risks, and the costs of implementing those options.

Working together can ensure consistent assessments of risk so that the available options, their benefits, costs and risks can be presented to decision-makers and a risk-based decision made.

As well as being important for the selection of appropriate security measures, a strong working relationship is also an important factor in ensuring IT projects do not result in systems that fail to comply.

BA was found to be accidentally logging payment card details (including, in many cases, CVV numbers, the majority of which were unencrypted, in plain text). BA stated that this was done for testing purposes; it was only intended to be operating when the system was not live but was left active by mistake. The attacker accessed 108,000 records that BA should not have been recording in the first place.

“The logging and storing of these card details…was not an intended design feature of BA’s systems and was not required for any particular business purpose…This error meant that the system had been unnecessarily logging payment card details since December 2015.” (3.22, p22)

The fact that BA did not identify that the credit card logging feature remained active after its system went live in 2015…demonstrates a failure to adopt appropriate technical and organisational measures…and compliance with the data protection principles, including data minimisation. (6.89-6.91, pp53-54)
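One technical measure that supports data minimisation in this situation is to redact card-number-like values before anything is written to application logs. The sketch below is a generic pattern of our own, not BA’s actual fix:

```python
# Illustrative sketch: a logging filter that masks runs of 13-19 digits
# (the typical length of a payment card number) before records are written.
import logging
import re

PAN_PATTERN = re.compile(r"\b\d{13,19}\b")


class RedactCardNumbers(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = PAN_PATTERN.sub("[REDACTED]", record.getMessage())
        record.args = ()  # the message is already fully formatted
        return True


logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")
logger.addFilter(RedactCardNumbers())
logger.info("card=%s cvv=***", "4111111111111111")  # logs: card=[REDACTED] cvv=***
```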

If Data Protection and IT experts work together from the outset of new projects – adding their expertise, alongside input from the business, to Data Protection Impact Assessments; considering how the system can be built on the principle of Data Protection by Design and Default; and articulating the impacts (in cost, time, functionality) and benefits of potential solutions – the project can deliver business outcomes and ensure GDPR obligations are met.

In this case, the ICO noted that:

“None of the…measures [the ICO had highlighted] would have entailed excessive cost or technical barriers. They are all readily available measures available through the Microsoft Operating System used by BA” (6.72, p48)

Once this approach is up and running, there is a need to consider the next lesson…

 

Document decision-making and ensure paperwork matches reality

When BA tried to defend its approach, it was often unable to provide evidence of its decision making.

For example, BA tried to say that it was following guidance when not requiring Multi-Factor Authentication (MFA) for certain remote network access. But when pressed for evidence of the decision making behind this approach, the ICO flagged that:

“Given…that no copy [of the risk assessment] can now be located, it is not possible to say that BA took into consideration the risk, the state of the art, the cost, or the available technical measures when deciding what security was appropriate.” (6.26, p34).

Similarly, when the ICO asked why other measures had not been considered or implemented, BA’s arguments failed to convince:

“BA argued that it was untenable to suggest that whitelisting was an alternative in practice…However: (i) there is no evidence that BA considered what alternative measures could be put in place as an alternative to MFA…and (ii) even if BA is correct that this solution would not have proven viable, it does not obviate the need to consider appropriate measures or explain why other appropriate measures were not in place” (6.27b, p35)

You, therefore, need the paperwork to evidence and support your decision-making, especially if the decision is to accept a degree of risk.

The ICO also noted how often BA’s policies and statements did not reflect the reality of what they found happening in practice.

At a policy level, a BA policy stated that Multi-Factor Authentication (MFA) would be used for all remote network access. When challenged to explain why 13 of 243 applications were not actually protected by MFA, the ICO concluded that:

“BA has not provided a satisfactory explanation as to why…it was deemed unnecessary for certain applications…to comply with the policy requiring MFA.” (6.21, p32-33).

At a strategic level, BA highlighted its “extensive commitment to information security.”  The ICO accepted BA had demonstrated commitment to certain aspects of information security and had put in place a programme to prepare its systems for the introduction of the GDPR.

Yet the ICO had to conclude that the programme had failed to identify and address the deficiencies in BA’s security that were highlighted by the attack, so BA was negligent in failing to ensure that it had taken all appropriate measures to secure personal data. (7.21 and 7.23, pp66-67).

Paperwork must therefore reflect reality. The GDPR’s accountability principle requires organisations to maintain evidence of their compliance and of the effectiveness of their measures. Records of decisions around IT security should therefore be a reliable, true and honest reflection of your approach, including any decisions to change, amend or deviate from agreed policies and procedures.
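One way to keep paperwork and reality aligned is to check stated policy against actual configuration on a regular schedule. The sketch below is purely illustrative: the policy wording, application inventory and field names are invented, and a real check would draw on your own identity and access management data.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    remote_access: bool
    mfa_enforced: bool

# Hypothetical policy statement and inventory, for illustration only.
POLICY = "All remote network access must be protected by MFA"

inventory = [
    Application("vendor-portal", remote_access=True, mfa_enforced=True),
    Application("legacy-reporting", remote_access=True, mfa_enforced=False),
    Application("intranet-wiki", remote_access=False, mfa_enforced=False),
]

def audit_mfa(apps: list[Application]) -> list[str]:
    """Return the applications that breach the stated MFA policy."""
    return [a.name for a in apps if a.remote_access and not a.mfa_enforced]

exceptions = audit_mfa(inventory)
if exceptions:
    # Each exception needs either remediation or a documented,
    # risk-assessed justification if the paperwork is to match reality.
    print(f"Policy: {POLICY}")
    print("Applications not compliant:", ", ".join(exceptions))
```

Any exceptions a check like this surfaces should either be fixed or recorded with a documented, risk-assessed justification.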

 

Timely breach reporting procedures are key

The need to report all breaches to the ICO, unless you consider that the breach is unlikely to result in a risk to people’s rights and freedoms, was a major change in data protection law.

BA did act swiftly in this regard. Within two hours they had made changes to stop the breach. Within a day they had notified the ICO, acquirer banks and payment schemes, and 496,636 affected customers about the incident. (3.26-3.27, pp23-24)

The ICO took into account how swiftly BA acted, and that BA also issued a press release to 5,000 journalists and commentators and was active on television, on social media and in the press about the attack (7.42, p72).

Having a clear process by which staff can report actual or suspected breaches, which is well publicised via training and awareness materials, has never been more important.

However, the issue in this case was not the response once BA became aware of the breach, but who made BA aware of it. It took a third party to inform BA that data was being sent from britishairways.com to BAways.com.

“The failures are especially serious in circumstances where it is unclear whether or when BA itself would ever have detected the breach. BA was only alerted to the [redirection] of personal data from its website [to the Attacker’s site] by a third-party. In the absence of that notification, the number of affected data subjects and any financial harm to them could have been even more significant” (7.10, p62)

The decision of whether to deploy software and tools to monitor and detect unusual or unauthorised activity within your systems will come down to the assessment of the risks and whether such measures are appropriate for your organisation.
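What counts as appropriate will differ between organisations, but the BA facts point to one simple class of control: alerting when a payment page references domains outside an expected list. The sketch below is a generic illustration built on invented inputs (the allowlist, the regex and the sample page are assumptions), not a description of any measure the ICO required; in practice, browser-side controls such as a Content Security Policy or Subresource Integrity address the same risk from the client side.

```python
import re

# Assumed allowlist of domains the payment page is expected to load scripts from.
APPROVED_SCRIPT_DOMAINS = {"britishairways.com", "www.britishairways.com"}

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']https?://([^/"\']+)', re.IGNORECASE)

def unexpected_script_domains(page_html: str) -> set[str]:
    """Return script source domains that are not on the approved list."""
    found = {m.group(1).lower() for m in SCRIPT_SRC.finditer(page_html)}
    return {d for d in found if d not in APPROVED_SCRIPT_DOMAINS}

# Invented example page for illustration.
sample_page = '<script src="https://baways.com/collect.js"></script>'
alerts = unexpected_script_domains(sample_page)
if alerts:
    print("ALERT: unapproved script domains on payment page:", alerts)
```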

Unpicking Article 22

Few people had heard of Article 22 of the GDPR before this summer’s automated grade-prediction story broke in the media. This lesser-known part of the GDPR is highly significant for our everyday lives: so many workflows are automated that the entitlement to challenge automated decisions is one of the most important data subject rights the GDPR bestows.

Automated decision-making presents a high risk to the rights and freedoms of individuals for a number of reasons. For example, it works at a volume and speed that are beyond human capacity to keep track of… let alone understand.

For a Controller who is accountable for the effects produced by their automated decision-making, familiarity with Article 22 and how it interacts with the rest of the GDPR is essential to managing data protection risk.

So what does it say?

Article 22

Automated individual decision-making, including profiling

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

This means that there is a general prohibition on leaving decisions that significantly affect individuals entirely up to computers. In other words, humans must be in the loop at some point.

‘Legal effects’ means ‘having an effect on the data subject’s legal rights’ – the obvious one that might spring to mind is the right not to be unfairly discriminated against, but there are many rights which could be affected. Even if no legal rights are directly affected, if the automated decision produces a significant impact in other ways (on an individual’s behaviour or choices, for example), then the processing is forbidden by default.

However, it’s not always forbidden – there are narrow circumstances in which it can be used….

2. Paragraph 1 shall not apply if the decision:
(a) Is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) Is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(c) Is based on the data subject’s explicit consent.

This means that solely automated decision-making of this kind can only be carried out on the basis of contract, legal authorisation or explicit consent – never legitimate interests, public interest or vital interests. When automated processing is employed to negotiate or fulfil the terms of a contract, it must be necessary to do so – not just ‘easier’ or ‘cheaper’.

Any Controller relying on legal authorisation must be prepared to cite the specific legal provision that authorises the automated processing and justify why it is necessary. For example, the use of fraud-monitoring systems to block suspicious financial transactions may be justified under regulation 18 of the Money Laundering Regulations, on the basis that analysing transaction patterns in real time is beyond the capability of most human beings, so automated judgement is appropriate.

Consent, of course, must be:

  • Informed – the logic of the processing must be described, the risks highlighted and the data subject’s rights explained,
  • Freely-given – there must be a genuine choice, and no detriment arising from refusal or withdrawal,
  • Explicit – consent for the automated processing to take place must be presented separately to terms and conditions, cookie consent, marketing consent or any other aspect of the interaction,
  • Unambiguous – a positive and specific response by the data subject to the question of consent for automated processing

… and all of this must be in place before the automated processing begins.
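Those four requirements translate naturally into a consent record that is checked before any automated decision about the individual is taken. The structure below is a minimal sketch with invented field names, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AutomatedDecisionConsent:
    """Minimal consent record for Article 22(2)(c); field names are illustrative."""
    data_subject_id: str
    informed: bool          # logic, risks and rights were explained
    freely_given: bool      # genuine choice, no detriment on refusal or withdrawal
    explicit: bool          # captured separately from T&Cs, cookies, marketing
    unambiguous: bool       # a positive, specific response to this question
    given_at: datetime
    withdrawn_at: datetime | None = None

    def permits_processing(self) -> bool:
        return (self.informed and self.freely_given and self.explicit
                and self.unambiguous and self.withdrawn_at is None)

consent = AutomatedDecisionConsent("subject-001", True, True, True, True, datetime.now())
assert consent.permits_processing()  # all conditions met before processing begins
```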

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

It’s highly likely that automated processing will have an impact on the rights, freedoms or interests of the data subject – and therefore a DPIA should be carried out (before the processing begins). ‘Suitable measures’ will depend on the nature (scope, context, etc) of the processing, so there’s no definitive checklist that can be applied in all scenarios. As this paragraph makes clear, there must be processes and channels for a data subject to:

a) identify that a decision has been made by automated means

b) challenge that decision

c) have a human being review the decision

although these steps may not be enough on their own to provide ‘suitable’ measures for protecting data subjects.
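One practical way to make those channels auditable is to record, for every automated decision, whether the individual was told, whether they contested it, and whether a human reviewed it. Again, this is a minimal sketch with invented field names rather than a required format:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    """Illustrative audit record for Article 22(3)-style safeguards."""
    decision_id: str
    data_subject_id: str
    outcome: str
    made_at: datetime
    subject_notified: bool = False             # (a) the individual knows it was automated
    contested_at: Optional[datetime] = None    # (b) they have challenged it
    human_reviewer: Optional[str] = None       # (c) a person has reviewed it
    review_outcome: Optional[str] = None

    def safeguards_outstanding(self) -> list[str]:
        gaps = []
        if not self.subject_notified:
            gaps.append("notify data subject that the decision was automated")
        if self.contested_at and not self.human_reviewer:
            gaps.append("assign a human reviewer to the contested decision")
        return gaps

record = AutomatedDecisionRecord("D-001", "subject-001", "declined", datetime.now(),
                                 subject_notified=True, contested_at=datetime.now())
print(record.safeguards_outstanding())  # ['assign a human reviewer to the contested decision']
```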

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

Automated decision-making presents an even greater threat to individuals’ rights and freedoms when special category personal data is involved, so it is restricted to when it is necessary in the substantial public interest, or with explicit consent, only.

(Substantial public interest conditions are outlined in Schedule 1, Part 2 of the UK Data Protection Act 2018.) A DPIA is definitely required if the automated processing will involve special category personal data.

Applying Article 22

a) Identify activities which will involve automated decision-making, map the data flows and the logic used in the decision-making process.

If you are looking at a product or service which involves automated decision-making, you will need to engage with the vendor to get this information. The ideal time to do this is before you buy anything.

b) Assess the ‘what-ifs’ – could this processing somehow produce ‘legal effects’, long-term consequences, or disruption to the data subject’s normal life?

If not, you can go ahead with the processing, as long as you ensure that the Principles, data subjects’ rights and Controller obligations are met for all of the processing.

c) Make sure the purposes, lawful bases and justifications of necessity are all established for the proposed automated decision-making
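Taken together, steps (a) to (c) amount to a short decision tree. The helper below sketches that reasoning only; its inputs are simplified assumptions and its output is a prompt for human analysis, not a legal determination:

```python
def article_22_check(solely_automated: bool,
                     legal_or_similar_effect: bool,
                     exception: str | None) -> str:
    """Rough Article 22 triage; 'exception' is one of 'contract', 'law', 'explicit_consent'."""
    if not solely_automated:
        return "Article 22 prohibition does not apply, but other GDPR duties still do."
    if not legal_or_similar_effect:
        return "No legal or similarly significant effect identified; keep the assessment under review."
    if exception in {"contract", "law", "explicit_consent"}:
        return ("Permitted only if necessity/validity is evidenced and suitable safeguards "
                "(human intervention, right to contest) are in place; complete a DPIA first.")
    return "Prohibited: no Article 22(2) exception applies."

print(article_22_check(True, True, "explicit_consent"))
```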

In the news: automated decision-making

Over the last week, automated decision-making has been in the news, after Ofqual (the Government agency regulating qualifications and exams in England) used an algorithm based on statistical modelling to issue A-level grades for students who had been unable to sit exams due to Covid-19. Around 40% of the assigned grades turned out to be lower than teachers’ assessments, with serious impacts on young people’s university and employment prospects.

At the heart of objections to Ofqual’s approach to grade assignment were significant data protection concerns about fairness, transparency and accuracy. After a public outcry, assigned grades based on Ofqual’s statistical model were withdrawn and replaced with teacher-assessed grades.

By now, the GDPR’s requirements for fairness, accuracy and transparency are fairly well known, but this particular set of events also put other aspects of data protection law onto the public’s radar – those relating to automated processing.

‘Automated processing’ is addressed in Recitals 71 and 75, while Article 22 outlines specific data subject rights when decisions affecting individuals are made based on automated processing of their personal data.

What does ‘automated decision-making’ mean?
‘Automated decision-making’ means, essentially, using computers to make decisions about people, based on processing of their personal data. Article 22 applies to decisions produced through solely automated means, where the processing of personal data takes place without any human intervention or influence while the decision is being made. This type of processing by its nature represents a higher risk to the rights and freedoms of the data subject.

When is ‘automated decision-making’ allowed?
Because of the risk to individual rights and freedoms, if solely automated decision-making has ‘legal effects’ or ‘similar’ significance, then it is generally prohibited unless certain conditions apply. Those conditions are:

22.2.a: where the processing is necessary for a contract between Controller and data subject.

  • This requires that the automation be critical to entering into or performing the terms of the contract – there must be a compelling reason to rely on automation without human input.

22.2.b: EU or Member State law applicable to the Controller authorises the processing and sets out suitable safeguards to protect data subjects’ rights, freedoms and legitimate interests;

Or
22.2.c: the data subject has given explicit consent for the processing.

  • Consent, of course, must be informed, freely-given, specific, unambiguous and explicit in order to be valid.

When special categories of personal data are concerned, the only Article 9 conditions that can be applied are explicit consent (9.2.a) or substantial public interest (9.2.g), and only where suitable safeguards for the rights, freedoms and interests of data subjects have been put in place.

Is automated processing the same as profiling?
No; ‘solely automated decision-making’ and ‘profiling’ aren’t quite the same thing, although they may often overlap. ‘Profiling’ is when an individual’s personal data is used to make judgements or predictions about their personal characteristics or behaviour, and it can include elements of human intervention in the processing. ‘Automated decision-making’ does not necessarily produce judgements or predictions, and has no element of manual, human involvement.

Don’t miss part 2; a deeper dive into how automated decision-making can be done fairly and lawfully…

Schrems II – now what?

So, you’ve read through the Schrems II FAQs and you know you need to do something…but perhaps you’re not quite sure exactly what that ‘something’ should look like. Luckily for you, we at Protecture have been giving this a lot of hard thought and we’ve produced this decision-making guide to help you work out your overall approach and next steps.

 

Does this affect my organisation (and if so; what should we do about it)?

  1. The first step is to work out whether your processing involves any transfer of personal data to the US (either by you, fellow Joint Controllers, your Processors or their sub-Processors). If you’ve already mapped your dataflows and populated your ROPA, this information should be ready and waiting for you – if not, you should seriously consider moving this work up the priority list.
  2. Once you have a clear picture of your US dataflows, you should investigate whether the ‘importers’ (the US organisations receiving personal data for which you are the Controller) may be affected by FISA-702 (a sketch of this mapping step follows at the end of this list).

These will be:

  • Remote and cloud computing service providers,
  • Electronic communication service providers,
  • Telecommunications carriers,
  • Any other kind of communication service provider whose staff, agents or contractors have access to wire or electronic communications as they are transmitted or stored.

(NB: hosting data on servers located within the EU does not provide protection from FISA-702, if a US-owned- or US-based company is involved in the hosting or processing of the data. It may be possible for these companies to put practical restrictions in place to keep the data out of the US’s intelligence agencies’ hands, but you’ll need to ask for the specifics of how this is/can be achieved.)

  3. Identify whether you are relying on Privacy Shield or SCCs for (any of) these data transfers to the US. If so, something will need to be done: Privacy Shield has been invalidated outright, and SCCs on their own are unlikely to protect US transfers from FISA-702 without supplementary measures, so the processing may well be unlawful. It’s time to look at your options…
    1. Is it possible to obtain informed, freely-given, specific and unambiguous consent from the data subject for the transfer of their data to the US? Can the data be retrieved and barred from further transfer if consent is withdrawn? If the answers are all ‘yes’, then you may be able to use consent as the condition for transfer. However, you should approach this with caution as consent is highly unlikely to be appropriate in most cases, and misuse or misapplication of consent is considered as serious as a security breach in the GDPR.
    2. Is there a contract between you, the Controller, and the data subject that cannot be fulfilled unless the data is transferred to the US? (Examples would include booking a flight to the US, ordering goods to be shipped from the US, or transferring personnel between the US and EU parts of a global business.)
      1. Does justification for the transfer stem from your organisation’s choice of a US provider, on the grounds of price, convenience or available features? If so, the necessity test is not met, and the contract won’t be a suitable condition for transfer.
    3. If your organisation is a public body, if the transfer is for the purposes of exercising or defending legal claims, or if there is a legally-binding international agreement in place, then you may be able to rely on these derogations – but you should consult experts in these areas, and may need legal advice.
  4. If you are unable to identify any suitable alternative to Privacy Shield or SCCs for your US data transfers, then these transfers amount to unlawful processing of personal data and will prevent you from being ‘compliant’ with the GDPR. However, you’re in the same boat as a lot of other organisations and probably not in immediate danger of getting into trouble – what you do about this will depend on the degree of risk your business is willing to accept.
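To make steps 1 and 2 of this guide concrete (as flagged above): if your ROPA records where each importer is based, what kind of service it provides and which transfer mechanism is relied on, pulling out the dataflows that need review is straightforward. The record structure, field names and categories below are invented for illustration only:

```python
from dataclasses import dataclass

@dataclass
class RopaEntry:
    """Illustrative ROPA record; real registers will hold far more detail."""
    processing_activity: str
    importer: str
    importer_country: str
    service_type: str        # e.g. "cloud hosting", "email", "telecoms", "payroll"
    transfer_mechanism: str  # e.g. "Privacy Shield", "SCCs", "adequacy", "none"

# Assumed service categories most likely to fall within FISA-702's reach.
FISA_702_SERVICE_TYPES = {"cloud hosting", "remote computing", "email", "telecoms"}

def us_transfers_needing_review(ropa: list[RopaEntry]) -> list[RopaEntry]:
    return [e for e in ropa
            if e.importer_country == "US"
            and e.service_type in FISA_702_SERVICE_TYPES
            and e.transfer_mechanism in {"Privacy Shield", "SCCs"}]

ropa = [
    RopaEntry("Newsletter", "MailCo Inc", "US", "email", "Privacy Shield"),
    RopaEntry("HR system", "EUPeople BV", "NL", "payroll", "adequacy"),
]
for entry in us_transfers_needing_review(ropa):
    print("Review:", entry.processing_activity, "->", entry.importer)
```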

 

Data protection risks of unlawful transfers

Caveat: these are generalisations that don’t factor in specifics of industry or circumstance

  • Regulatory risk: MODERATE.

Enforcement against unlawful transfers will be a long time coming in the UK, and is likely to be focused on large or high-profile organisations first. If your organisation is subject to regulation by EU Data Protection Authorities, then you may face an increased risk of enforcement, and civil action by data subjects remains a possibility. Unlawful transfers are likely to be an exacerbating factor in breaches or other non-compliance incidents.

  • Commercial risk: HIGH.

Customers and corporate supply chains will be seeking assurances that you won’t put their data at risk or expose them to liability for non-compliance, especially if you operate in highly-sensitive or regulated areas (healthcare, finance, politics, etc).

  • Operational risk: MODERATE.

If you don’t already have suitable data protection checks and procedures in place for procurement, new projects and programmes that involve the processing of personal data, you’re likely to be building up ‘compliance debt’ which will become more disruptive to fix the longer it carries on.

  • Ethical risk: HIGH.

Considering the current political climate in the US, there are several categories of people for whom covert access to their data by US intelligence services could be viewed as a serious problem. Organisations which claim a strong ethical stance should think carefully about the potential impact to the rights and freedoms of their staff (including volunteers and contractors), customers or service users, and supporters; even if that impact is not visible or traceable.

 

You shouldn’t try to apply a one-size-fits-all approach to addressing the problem of unlawful transfers – you should look at the dataflows individually, and consider:

  • What purpose(s) the dataflow serves, and whether it is business-critical
  • The solidity of the lawful basis on which the dataflow is based – for example, an unlawful transfer will undermine the applicability of a legitimate interests basis to the processing.
  • The practical and technical feasibilities of making changes

Whatever actions you decide to take, you must document your decision-making process and be able to show evidence of your implementation.

 

When dealing with unlawful transfers, you have three options:

  1. Terminate – stop doing the thing that incurs the risk
  • Suspend processing activities which rely on unlawful transfers

This is only going to be realistic for non-critical dataflows supporting non-critical business operations, or where a suitable and safe alternative is already in place and ready to go. It’s the ‘nuclear option’ and will likely have a significant operational and commercial impact.

  2. Treat – reduce the likelihood or the impact of the risks turning into issues

Examples of ways to treat this risk are:

  • Find an alternative provider in a destination country within the EU, or in one with a finding of adequacy.
  • Seek legally-binding assurances from the provider that in practical (‘factual’) terms, the personal data is protected against being accessed or obtained by any US-based supplier parent, subsidiary, department or sister entity.
  • Reduce the dataset which is being transferred to the bare business-critical minimum.
  • Improve your organisation’s data protection overall, to reduce the chance of an incident that could turn the risks into issues.
  • Allow data subjects to opt out of features or services that rely on transfer of their personal data to the US, and provide the necessary tools and procedures for their data to be excluded.
  3. Tolerate – do nothing and hope the risk doesn’t turn into an issue

This is obviously the low-cost and low-effort course of action, and it may be appropriate for a limited time, but is unlikely to be sustainable in the long-term. While there may be more urgent or important things to think about right now, you should at least document a timescale, or triggers, for revisiting this decision.

 

Contact Protecture on 020 3691 5731 or log a ticket if you have any Privacy Shield questions or queries.