The Irish Data Protection Commission is to launch an investigation into a data leak in which the details of hundreds of millions of Facebook users were published online.
It had previously been looking into Facebook’s claims that the data was old, from a leak reported in 2019.
But it now says that there could have been a breach of data laws.
Facebook said it was “co-operating fully”.
In a statement, the firm said the inquiry “relates to features that make it easier for people to find and connect with friends on our services”.
It added: “These features are common to many apps and we look forward to explaining them and the protections we have put in place.”
The Irish regulator plays a key role in such investigations because Facebook’s European headquarters are in Dublin.
‘Cause for concern’
The dataset of 533 million people from 106 countries was published on a hacking forum earlier this month, although the data had initially been scraped from Facebook some years earlier.
The feature that was manipulated to access the data was changed in 2019 after Facebook became aware that it was being abused.
Although not every piece of data was available for every user, the large scale of the leak was a cause for concern among privacy experts.
In a statement, the DPC said it had looked at the evidence and was “of the opinion that one or more provisions of the GDPR and/or the Data Protection Act may have been, and/or are being, infringed in relation to Facebook users’ personal data”.
“Accordingly, the commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users.”
Firms found to be in breach of GDPR face fines of up to 4% of their annual global turnover.
The use of facial recognition for surveillance, or algorithms that manipulate human behaviour, will be banned under proposed EU regulations on artificial intelligence.
The wide-ranging proposals, which were leaked ahead of their official publication, also promised tough new rules for what they deem high-risk AI.
That includes algorithms used by the police and in recruitment.
Experts said the rules were vague and contained loopholes.
The use of AI in the military is exempt, as are systems used by authorities in order to safeguard public security.
The suggested list of banned AI systems includes:
- those designed or used in a manner that manipulates human behaviour, opinions or decisions… causing a person to behave, form an opinion or take a decision to their detriment
- AI systems used for indiscriminate surveillance applied in a generalised manner
- AI systems used for social scoring
- those that exploit information or predictions about a person or group of persons in order to target their vulnerabilities
European policy analyst Daniel Leufer tweeted that the definitions were very open to interpretation.
“How do we determine what is to somebody’s detriment? And who assesses this?” he wrote.
For AI deemed to be high risk, member states would have to apply far more oversight, including the need to appoint assessment bodies to test, certify and inspect these systems.
And any companies that develop prohibited services, or fail to supply correct information about them, could face fines of up to 4% of their global revenue, similar to fines for GDPR breaches.
High-risk examples of AI include:
- systems which establish priority in the dispatching of emergency services
- systems determining access to or assigning people to educational institutes
- recruitment algorithms
- those that evaluate credit worthiness
- those for making individual risk assessments
- crime-predicting algorithms
Mr Leufer added that the proposals should “be expanded to include all public sector AI systems, regardless of their assigned risk level”.
“This is because people typically do not have a choice about whether or not to interact with an AI system in the public sector.”
As well as requiring that new AI systems have human oversight, the EC is also proposing that high-risk AI systems have a so-called kill switch – either a stop button or some other procedure to instantly turn the system off if needed.
“AI vendors will be extremely focussed on these proposals, as it will require a fundamental shift in how AI is designed,” said Herbert Swaniker, a lawyer at Clifford Chance.
Sloppy and dangerous
Meanwhile Michael Veale, a lecturer in digital rights and regulation at University College London, highlighted a clause that will force organisations to disclose when they are using deepfakes, a particularly controversial use of AI to create fake humans or to manipulate images and videos of real people.
He also told the BBC that the legislation was primarily “aimed at vendors and consultants selling – often nonsense – AI technology to schools, hospitals, police and employers”.
But he added that tech firms who used AI “to manipulate users” may also have to change their practices.
With this legislation, the EC has had to walk a difficult tightrope between ensuring AI remains what it calls “a tool… with the ultimate aim of increasing human wellbeing”, and ensuring it doesn’t stop EU countries competing with the US and China on technological innovation.
And it acknowledged that AI already informed many aspects of our lives.
The European Centre for Not-for-Profit Law, which had contributed to the European Commission’s White Paper on AI, told the BBC that there was “lots of vagueness and loopholes” in the proposed legislation.
“The EU’s approach to binary-defining high versus low risk is sloppy at best and dangerous at worst, as it lacks context and nuances needed for the complex AI ecosystem already existing today.
“First, the commission should consider risks of AI systems within a rights-based framework – as risks they pose to human rights, rule of law and democracy.
“Second, the commission should reject an oversimplified low-high risk structure and consider a tier-based approach on the levels of AI risk.”
The details could change again before the rules are officially unveiled next week. And the legislation is unlikely to become law for several more years.
Neuralink, Elon Musk’s computer-to-brain interface firm, has released a video it claims shows a monkey playing the video game Pong with its mind.
Its brain signals were sent wirelessly via an implanted device.
The hope is that the interface could eventually allow people with neurological conditions to control phones or computers remotely.
One expert said the fact no wires were used represented “significant progress”, but more data was needed.
The macaque monkey, named Pager, was first taught to play the video game with a joystick, and was rewarded with a fruit smoothie.
During this process, the Neuralink device recorded the information about which neurons were firing to control which movements.
Then the joystick was disconnected, leaving the monkey to control gameplay with its mind only.
In a blog post the researchers wrote: “Our mission is to build a safe and effective clinical BMI (Brain Machine Interface) system that is wireless and fully implantable.
“Our first goal is to give people with paralysis their digital freedom back, to communicate more easily via text, follow their curiosity on the web, to express their creativity through photography and art, and, yes, to play video games.”
After that it said the system “could also potentially be used to restore physical mobility” by using the link to read signals in the brain which could be used to stimulate nerves and muscles in the body.
But the process would have to be refined.
“With the monkey, we calibrate the decoder by mapping neural activity patterns to actual joystick movements. However, we won’t be able to use such a strategy for people with paralysis,” it said.
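Neuralink has not published its decoder, but the calibration step it describes – mapping neural activity patterns to joystick movements – can be sketched as a simple least-squares fit. Everything below (neuron count, simulated firing rates, the linear model) is an illustrative assumption, not Neuralink's actual method.

```python
# Illustrative decoder calibration: fit a linear map from simulated
# neural firing rates to 2-D joystick velocity, then use it to decode.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 200, 16

# Simulated firing rates, and a hidden "true" linear mapping to velocity
rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
true_W = rng.normal(size=(n_neurons, 2))
velocity = rates @ true_W + rng.normal(scale=0.1, size=(n_samples, 2))

# Calibrate: least-squares estimate of the rates-to-velocity mapping,
# analogous to recording which neurons fire for which joystick movements
W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode: predict cursor velocity from neural activity alone
predicted = rates @ W_hat
```

Once calibrated, the joystick can be "disconnected": the decoded velocity alone drives the cursor, which is the essence of the demonstration.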
Mr Musk’s response to the experiment was typically bold. He tweeted: “First Neuralink product will enable someone with paralysis to use a smartphone with their mind faster than someone using thumbs.”
He went on to say the next stage would be “enabling, for example, paraplegics to walk again”.
He has a long-term ambition to usher in an age of “superhuman cognition”, in part to combat what he sees as artificial intelligence so powerful it could destroy the human race.
Andrew Jackson, professor of neural interfaces at Newcastle University, said of the experiment: “Brain control of computer cursors by monkeys is not exactly new, and this demonstration extends a line of work that dates back at least to pioneering studies in the early 2000s.
“The control in the video looks impressive, but without seeing a proper publication on their data it is hard to say how it compares to the current state-of-the-art.”
But he added the fact that there were no cables coming through the skin of the monkey, and that brain signals were sent wirelessly, was “definitely new and innovative”.
“This to me is the advance here, and is important both for improving the safety of human applications (wires through the skin are a potential route for infection) and also as a way of improving the welfare of animals used in neuroscience studies.
“The Neuralink team has definitely made significant progress in this regard.”
Neuralink has previously shown off a video of a pig called Gertrude with a chip in her brain, and a computer tracking her neural activity as she looked for food.
Neuralink co-founder Max Hodak recently tweeted about the possibility of using tech and engineering to create new species: “We could probably build Jurassic Park if we wanted to”, he said, adding it wouldn’t create “genetically authentic dinosaurs” and would require 15 years of breeding and engineering to get “super exotic novel species”.
Some experts though remained more concerned about the welfare of the monkeys and pigs Neuralink is using in its current experiments.
Dr Katy Taylor, director of science and regulatory affairs at Cruelty Free International, said: “It beggars belief that animals are being used in this type of grotesque curiosity-driven experiment.
“In fact, 57% of experiments in universities are now believed to be in the area of basic research, much of it driven by nothing more than curiosity and certainly not required by law.”
In its blog, Neuralink said all animals were treated well, with their care evaluated by vets.
“Pager lives with his best mate, Code. They enjoy swinging from their treehouse and napping in their hammocks after an engaging gaming session,” the firm said.
A “simple flaw” caused by a language difference led to a “serious incident” for a flight from Birmingham last year.
All female passengers whose title was “Miss” were classified as children – not adults – on the Tui flight after a software upgrade, a report said.
That meant that their average weight used for take-off calculations was lower than it should have been.
The difference could have had an impact on take-off thrust, but the report said flight operation was not compromised.
Take-off prep documents told the pilot that his Boeing 737 jet was 1,244kg lighter than it actually was after using 35kg as the average weight of the females involved rather than 69kg.
That “load sheet” is used to determine how much weight is safe, how the plane is balanced, and other important information for a safe flight within safety regulations.
The AAIB report on the incident said the reservation system that produces the load sheet had been upgraded in the downtime caused by the coronavirus lockdown last year, when the airline suspended flights “for several months”.
Safety officials said the problem was that the software had been programmed in a foreign country where “Miss” is used to refer to children, and “Ms” to adult women.
That simple language difference led to 38 adult women being considered “children” by the computer system – a difference which, when added to the rest of the passengers and cargo, led to the plane’s calculated and actual weights differing by more than a tonne.
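The failure described above can be sketched in a few lines – a hypothetical reconstruction, not the airline's or the vendor's actual system. The function and flag names are invented; only the standard weights (35kg for a child, 69kg for an adult female) and the count of 38 misclassified passengers come from the report.

```python
# Hypothetical sketch of the title-to-standard-weight lookup described
# in the AAIB report. Names are invented for illustration.

CHILD_KG = 35         # standard child weight on the load sheet
ADULT_FEMALE_KG = 69  # standard adult female weight

def standard_weight(title: str, miss_means_child: bool) -> int:
    """Return the standard weight assumed for a female passenger."""
    # The bug: in the locale the software was written for, "Miss"
    # denotes a child, so adult women titled "Miss" were underweighted.
    if title == "Miss" and miss_means_child:
        return CHILD_KG
    return ADULT_FEMALE_KG

# 38 adult women titled "Miss" were misclassified on the flight
shortfall_kg = 38 * (ADULT_FEMALE_KG - standard_weight("Miss", True))
# 1,292kg with these illustrative inputs - the same order as the
# 1,244kg discrepancy described in the report
```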
The flight was an early-morning departure from Birmingham International to Majorca, on 21 July 2020.
The Air Accidents Investigation Branch (AAIB) said that in this case, the difference in the calculation of take-off thrust was out by only 0.1% – so “the safe operation of the aircraft was not compromised”.
Investigators said two other flights the same day also suffered from the same problem, but extra manual checks were immediately brought in to prevent the problem happening again.
And another upgrade of the computer system solved the problem.
In a statement, Tui said: “The health and safety of our customers and crew is always our primary concern. Following this isolated incident, we corrected a fault identified in our IT system.
“As stated in the report, the safe operation of the flight was not compromised.”
Live-audio app Clubhouse is creating plenty of chatter – about itself.
Elon Musk, Oprah Winfrey, Kanye West, Demi Lovato and Mark Zuckerberg are among the celebrities to have popped up on the service. And you can find chats about everything from Bitcoin and Buddhism to relationships and R&B music on it.
Even so, the idea that this one-year-old app could be worth $4bn (£2.9bn) is startling.
It stems from a Bloomberg report saying the San Francisco-based start-up is seeking fresh funds at this level.
But it was only in January that venture capital fund Andreessen Horowitz bought a stake valuing the firm at a quarter of the sum.
The jump may be justified by a follow-up report that Twitter has discussed buying the app for the higher price – although it declined to confirm this when asked.
Why would Clubhouse be worth so much?
Clubhouse had about 13.4 million users in late March, according to research firm App Annie, having added about a fifth of that number over the previous four weeks alone.
In short, it’s growing quickly – despite being “invite only” and limited to Apple’s iOS – and appears to have found a gap in the market.
“It’s at the intersection of several hot trends – audio, live and social,” said Joseph Evans from Enders Analysis.
“And it’s recreating some of the things we can’t do normally because of the coronavirus pandemic restrictions, such as attending a talk or having a group conversation.”
At present the app doesn’t make any money. But that’s not necessarily important.
As Sarah Frier’s book No Filter recounts, Mark Zuckerberg insisted Instagram take its time before introducing ads after Facebook bought the photo app in 2012 for a then-groundbreaking $1bn.
The reason, he said, was that it was more important to establish “staying power” first.
But Instagram only achieved its value because both Facebook and Twitter wanted to buy the business.
And that’s unlikely to be the case this time round.
“A few years ago Facebook would probably have already put an offer on the table [for Clubhouse],” said Mr Evans.
“But it’s not in the market for another social network because of the competition scrutiny that it’s under, so Facebook’s only option is to compete with it.”
Indeed, Facebook has just launched a new web-based app of its own called Hotline, which lets hosts chat to their audience via audio and text.
What other rivals are out there?
Twitter has already launched Spaces, an audio-streaming feature inside the existing Twitter app.
It is being rolled out to select creators first, but the plan is to allow anyone to create a “space” later this month.
Chat app Telegram launched a voice chat feature last year, and revamped the feature in March to work like Clubhouse’s one-to-many dynamic.
And Discord – a sleeping giant in the voice comms space – has just launched Stages, where one speaker “on stage” can speak to many people at once.
Business-focused giants Slack and LinkedIn are known to be working on the idea too.
Many of these apps have much bigger audiences than Clubhouse has ever had – and are available on more platforms, including Android and PCs.
What other challenges does Clubhouse face?
Content moderation is set to be a big issue.
Clubhouse has already attracted controversy with reports of it being used by far-right personalities to discuss claims of women fabricating rape accusations, as well as instances of racism, sexism and anti-Semitism.
And this isn’t a good time to attract this kind of attention.
Politicians in the US are threatening to remove legal protections given to social networks under a law known as Section 230, after accusing them of bias and allowing harmful material to run rampant.
And in the UK, the proposed Online Safety Bill could soon give regulator Ofcom the power to block apps it judges to have failed to protect users.
Policing live audio is a lot harder than using algorithms to detect offensive text-based comments.
And while Clubhouse does retain audio recordings of chats if an incident is reported to it in “real-time”, it does not do so if a user tries to report a past offence, hindering any follow-up investigation.
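The asymmetry is easy to see on the text side: even a crude keyword filter like the minimal sketch below – with an entirely invented word list – can scan comments cheaply at scale, while no comparably simple check exists for a live audio stream.

```python
# A deliberately naive text-moderation filter - the kind of simple
# check with no easy equivalent for live audio. The blocklist is a
# placeholder, not a real moderation list.
import re

BLOCKLIST = {"spamword", "scamlink"}  # invented placeholder terms

def flag_comment(text: str) -> bool:
    """Return True if any blocklisted word appears in the comment."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return not BLOCKLIST.isdisjoint(words)

flag_comment("buy now spamword here")  # True
flag_comment("a harmless comment")     # False
```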
Another risk is that Clubhouse might not prove “sticky” enough with its users.
One early adopter says she has found herself using it less and less because there are podcasts and other media available that do a better job of competing for her attention.
“Clubhouse makes it really easy for people to create content, but actually the content itself is surprisingly hard to use,” said Sharon O’Dea, an Amsterdam-based digital communications consultant.
“You can’t share it, you can’t record it, you can’t quote it, and it often takes speakers ages to get to their key point.
“It just feels to me like it doesn’t respect my time as a consumer.”
How might Clubhouse become profitable?
The traditional way for social networks to make money is adverts.
But an audio-only format makes that difficult.
Would users stick around if forced to listen to pre-roll promotions? Would the natural flow of conversations be damaged by hosts having to pause for regular ad breaks, or, worse, by ads simply playing over parts of discussions, in a similar way to how Twitch interrupts video gameplay?
As an alternative, the app’s creators have suggested they could take a cut of payments made by listeners to room hosts in order to thank them or access premium “ticketed” content.
Patreon, another start-up, has already built a business around this model, and was valued at $4bn in its latest funding round.
Clubhouse has just introduced a money transfer tool of its own – but for now has opted to let 100% of payments go to creators rather than taking a commission.
Twitter has launched a new emoji for the “Milk Tea Alliance” – a movement uniting Asian pro-democracy activists.
The alliance has brought together anti-Beijing protesters in Hong Kong and Taiwan with pro-democracy campaigners in Thailand and Myanmar.
The emoji, a white cup set against three different shades of a popular regional tea, marks a year since the #MilkTeaAlliance hashtag started.
One of the movement’s targets is Chinese dominance, and Beijing says it is “full of biases”.
What is the Milk Tea Alliance?
The alliance has united campaigners pushing for greater democracy.
It emerged last year from a Twitter dispute between Chinese nationalists and a young Thai actor and his girlfriend, who were accused of supporting democracy in Hong Kong and Taiwanese independence.
The movement gained popularity among pro-democracy activists in Thailand who are calling for reform of the monarchy and protesters in Hong Kong who accuse Beijing of harming the territory’s democracy.
Use of the hashtag spiked again this February after a military coup in Myanmar led to mass protests.
“We have seen more than 11 million tweets featuring the #MilkTeaAlliance hashtag over the past year,” Twitter said in a post on the platform.
The emoji will automatically show up when users post the #MilkTeaAlliance hashtag.
Twitter said: “During times of civil unrest or violent crackdowns, it is more important than ever for the public to have access to the #OpenInternet for real-time updates, credible information, and essential services.”
The announcement is a big moment for the movement, Netiwit Chotiphatphaisal, a leading Thai activist, told the BBC.
“It’s very good that Twitter has recognised what we have fought for for many years,” he said. “We have to fight not only our dictators but also with Chinese dominance.”
“The movement is happening not only online but offline. It’s made a big impact.”
Prominent activists from the region have celebrated the announcement online.
Previously, Twitter has created emojis for the #MeToo and #BlackLivesMatter movements.
What does China say?
The Chinese government has criticised the Milk Tea Alliance.
The movement has “consistently held anti-China positions, and is full of biases against China,” Chinese foreign ministry spokesman Zhao Lijian told a press conference on Thursday, Bloomberg reported.
Everyone has their breaking point. And when it comes it can be a small thing, an incident that usually wouldn’t matter.
Shannon Wait’s moment was when her Google-issued water bottle broke. The data centre she worked in was hot, so she asked for another one. However, she says the Google subcontractor refused to give her one.
That moment sparked a chain reaction that led to an announcement last week. Google signed a statement saying the company’s workers had the right to discuss pay and conditions with each other.
It might seem bizarre that this even needed to be said.
But in actual fact it was the culmination of Shannon’s battle with the company.
Her story is one of management overreach, a story that shines a light on managerial practices that have become synonymous with Big Tech.
Shannon finished her history degree in 2018 and started working at a Google data centre in South Carolina the following February, earning $15 (£10.90) an hour.
“You’re fixing the servers, which includes swapping out hard drives, swapping motherboards, lifting heavy batteries, they’re like 30lb (13.6kg) each,” she says. “It’s really difficult work.”
Google’s offices are famed for being creative, alternative and fun – with table tennis tables, free snacks and music rooms. However, what Shannon describes sounds less idyllic.
“People aren’t playing games all day like you see in the movies… the data centre is completely different,” she says.
Shannon was a contractor at Google. That means that although she worked in a Google data centre, she was actually employed by a subcontractor called Modis, part of a group of companies owned by another firm, Adecco.
That complex arrangement has become increasingly common at Google. About half of the people who work for the company are reportedly employed as contractors.
It also makes working out who actually carries the can for managerial mistakes complex. But we’ll get to that later.
Shannon says when the pandemic hit, the work got harder. The minimum number of jobs per shift increased. But there was a sweetener.
“Around the time of May 2020, Google announced that they were going to handle the pandemic in an honourable way. They said that they were going to give bonuses to every employee, including contractors, who work in person,” she says.
“The time came that we were supposed to get that bonus and it never appeared in any of our bank accounts. We started getting concerned like, you know, I really could use this extra money.”
It was around this point that she says employees started talking to each other about the bonus, and how much they were entitled to receive.
“We started asking each other about pay, but any time it came up in front of management we were told not to talk about it.”
Shannon says she was even sent a message by a manager saying: “It is never ok to discuss compensation with your peers”. She shared it with the BBC.
Shannon did eventually receive her bonus, but says she had become disillusioned. She had hoped to get a full-time job at Google. However, she noticed a culture of “perma-temps”, temporary staff who she says would never get made staff, no matter how much they tried.
Frustrated by management, Shannon reached what she says was her breaking point.
“It’s very hot in the data centres – about 85F (29.5C). So Google issued me a water bottle, but the cap on it broke.”
She says the same thing happened to her colleague, a full-time Google employee. However, Shannon says although her colleague was given a new bottle, she wasn’t. She went home and typed up a Facebook post.
Eventually, she says she had “had enough”.
“The next day, I was at work, I got called into a conference room with, for the most part, all the managers present. And they told me that my Facebook post was in violation of the non-disclosure agreement, and that I was a security risk and needed to hand over my badge and my laptop immediately, and be escorted off site.”
The Alphabet Workers Union was set up in January 2021 for Google workers. It is not recognised by the National Labor Relations Board, an independent government agency, and is sometimes referred to as a “minority union”. The vast majority of Google workers aren’t members, but Shannon was and the union took up her case.
In February, they filed two cases on her behalf under unfair labour practice laws. One that she had been suspended illegally – for talking about supporting a union. The other that her managers had asked her, illegally, not to discuss her wages.
Last month Google, Modis and the Alphabet Workers Union reached a settlement.
Shannon’s suspension was overturned.
Google signed a document saying its employees “have the right to discuss wage rates, bonuses, and working conditions”.
It was a victory for both Shannon and the newly-formed union.
“People who work in warehouses and data centres for these trillion dollar companies are tired of even their smallest rights being trampled on. And they’re realising that the companies aren’t listening to their workers. So we’re going to make them.”
Last week, Amazon workers in Alabama voted on whether they should unionise. Amazon is desperate to avoid workers unionising.
The result is expected soon. It’s the latest battle between Big Tech and some workers who feel, to put it mildly, unloved.
“I think that one of the biggest things that people can learn is that not all Google employees make six figures… and even on the lowest level at Google, they have so much power – so much more power than they realise,” said Shannon.
And as for Google?
Well it didn’t admit any wrongdoing as part of the settlement, and didn’t admit to being a “joint employer” of contract staff. The BBC put Shannon’s story to Google but it said it had nothing further to add. Adecco has not responded to a BBC request for comment.
Shannon doesn’t want to return to a Google data centre, and ultimately wants to do a PhD in history. But she has already contributed to the history books, a rare win by an employee against a tech giant.
Computing giant Microsoft recently put out a report claiming that businesses globally are neglecting a key aspect of their cyber-security – the need to protect computers, servers and other devices from firmware attacks.
Its survey of 1,000 cyber-security decision makers at enterprises across multiple industries in the UK, US, Germany, Japan and China has revealed that 80% of firms have experienced at least one firmware attack in the past two years.
Yet only 29% of security budgets have been allocated to protect firmware.
However, the new report comes on the back of a recent significant security vulnerability affecting Microsoft’s widely-used Exchange email system.
And the computing giant launched a range of extra-secure Windows 10 computers last year that it says will prevent firmware from being tampered with.
So is this just an attempt to divert attention and sell more PCs, or should businesses be more worried?
How a firmware attack works
Firmware is a type of permanent software code used to control each hardware component in a PC.
Increasingly, cyber-criminals are designing malware that quietly tampers with the firmware in motherboards, which tells the PC how to start up, or with the firmware in hardware drivers.
This is a sneaky way to neatly bypass a computer’s operating system or any software designed to detect malware, because the firmware code is in the hardware, which is a layer below the operating system.
Security experts have told the BBC that even if IT departments are following cyber-security best practices like patching security vulnerabilities in software, or protecting corporate networks from malicious intrusions, many firms are still forgetting about the firmware.
“People don’t think about it in terms of their patching – it’s not often updated, and when it is, sometimes it breaks things,” explains Australian cyber-security researcher Robert Potter.
Mr Potter built the Washington Post’s cyber-security operations centre and has advised the Australian government on cyber-security.
“Firmware patching can sometimes be tricky, so for a lot of companies, it’s become a blind spot.”
There have been several major firmware attacks discovered in the last two years, such as RobbinHood, a ransomware that uses firmware to gain root access to a victim’s computer and then encrypts all files until a Bitcoin ransom has been paid. This malware held the data of several US city governments hostage in May 2019.
Another example is Thunderspy, an attack that utilises the direct memory access (DMA) function that PC hardware components use to talk to each other.
This attack is so stealthy that an attacker can read and copy all data on a computer without leaving a trace, and the attack is possible even if the hard drive is encrypted, the computer is locked, or set to sleep.
“If device firmware has no protection in place, or if the protection can be bypassed, then firmware compromise is both incredibly serious and potentially invisible,” explains Chris Boyd, a malware intelligence analyst at security firm Malwarebytes.
“Remote or physical compromise which permits rogue code to run can set the stage for data theft, system damage, spying, and more.”
Big organisations beware
The good news is that firmware attacks are less likely to target consumers, but big firms should beware, according to Gabriel Cirlig, a security researcher with US cyber-security firm Human (formerly White Ops).
“It is a big deal, but fortunately it only works against big organisations, because you need to target specific types of motherboards and firmware,” he tells the BBC.
Typically, cyber-criminals tend to attack operating systems and popular software, because they only make money if they can infect the largest number of end users.
Firmware attacks are less common and more complicated to implement than other types of cyber-attacks, but unfortunately the coronavirus pandemic has accelerated the problem.
The National Institute of Standards and Technology (NIST), an agency within the US Department of Commerce, continually updates a National Vulnerability Database (NVD) with new security flaws.
The database has recorded a five-fold increase in attacks against firmware in the last four years.
Coronavirus lockdowns in multiple countries have led to multiple employees working from home and connecting remotely to work servers. Each one of those computers and mobile devices is an opportunity.
Carrying out a firmware attack might be complex, says Mr Cirlig, but if attackers could silently steal critical information from a c-suite executive’s laptop, like passwords, they could then use it to infiltrate a company’s networks and steal more data.
Nation-state hackers would be most likely to use such an attack, he adds.
“This is a big operation with big pay-offs – it’s not something that a small group of cyber-criminals has the manpower to do.”
Creeping soon to a network near you
Although firmware attacks are not as ubiquitous as phishing scams, malware or other cyber-attacks, the cyber-security experts the BBC spoke to say now is the time for businesses, and the technology industry as a whole, to pay attention to hardware security.
“Firmware attacks are not common on a day-to-day basis, but that’s because people don’t realise they’re being infected by such an attack,” says Mr Boyd.
“It’s like when ransomware first came onto the scene – people didn’t know of anyone who was infected by it, and if big organisations were, they wouldn’t tell anyone about it, as there was an element of shame, not wanting their clients to know they’d been infected.”
Mr Boyd adds that a new generation of “budding hardware enthusiasts”, who have been learning their way around firmware by “modding video game consoles over the last decade”, could well pose additional threats to enterprise cyber-security going forward – a point Mr Cirlig fervently agrees with, having hacked the firmware in his own car when he was younger.
“Microsoft is right to raise this as a major issue, because we need to bring firmware designers and operational technologies along the journey of cyber-security, the way we have with software companies,” says Mr Potter.
“As we connect more things to the internet, we’re connecting a lot more devices that haven’t been designed with cyber-security in mind. And if the trend continues, bad guys will go after it.”
‘Fake’ accounts claiming to be Amazon workers have been praising their working conditions on Twitter.
Votes are currently being counted in Alabama to decide whether Amazon warehouse workers will form a union.
But last night, a series of anti-union tweets were sent from accounts claiming to be staff.
Twitter has now suspended many of the accounts, and Amazon has confirmed at least one is fake.
Most of the accounts were made just a few days ago, often with only a few tweets, all related to Amazon.
“What bothers me most about unions is there’s no ability to opt out of dues,” one user under the handle @AmazonFCDarla tweeted, despite a state law in Alabama which prevents this.
“Amazon takes great care of me,” she added.
Another account – which later changed its profile picture after it was revealed to be fake – said: “Unions are good for some companies, but I don’t want to have to shell out hundreds a month just for lawyers!”
Many of the accounts involved used the handle @AmazonFC followed by a first name.
Amazon has previously used this handle for its so-called Amazon Ambassadors – real employees who are paid by the firm to promote and defend it on Twitter.
An Amazon spokesperson told the BBC that Darla was not an official Ambassador, but has not responded to the BBC’s request for clarification on several other accounts.
“It appears that this is a fake account that violates Twitter’s terms,” the spokesperson said. “We’ve asked Twitter to investigate and take appropriate action.”
Several of the high-profile accounts have been suspended by Twitter. It told the BBC that Amazon Ambassadors are subject to Twitter’s rules on spam and platform manipulation.
Accounts which impersonate or falsely claim to be affiliated with a company can be temporarily suspended or removed.
Any parody account should have a disclaimer in its Twitter bio, the company added.
It is unclear whether the accounts are real employees, bots or trolls pretending to be Amazon Ambassadors.
Spending on social and collaborative applications will likely see double-digit growth this year, according to a Gartner forecast, as businesses focus on keeping remote workers connected during the pandemic and beyond.
While videoconferencing tools such as Zoom, Microsoft Teams, and Cisco Webex were the big winners of the global shift to remote working during 2020, there has also been an uptick in interest in other types of platforms that connect workers and facilitate ad-hoc communications and “watercooler”-style interactions.
Revenues in three software market segments – collaborative work management, enterprise social networks, and employee communication software – are set to reach almost $4.5 billion in 2021, a 17% increase from $3.8 billion in 2020, according to Gartner’s “Forecast Analysis: Social and Collaboration Software in the Workplace, Worldwide.”
The five-year revenue forecast predicts that revenues will increase by double-digit percentages each year to $6.9 billion in 2024; the research firm increased its long-term revenue growth forecast in comparison to its 2019 expectations due to the global shift toward remote work. Other drivers behind the increase include a rise in the number of knowledge workers globally, according to Gartner.
“The need to suddenly empty out all the office buildings, while keeping businesses afloat, gave a jolt to many markets, with social and collaboration being one of those at the forefront,” said Craig Roth, research vice president at Gartner and one of the report’s authors. “Social and collaboration products went from ‘nice to have’ to ‘must have’ within a period of a few weeks.”
Of the three areas covered in the report, collaborative work management tools – including the likes of Asana, Trello and Monday – were the biggest driver of revenue growth, as businesses sought to digitize “non-routine” task management and coordination processes that would otherwise require back-and-forth emails. “Tracking and coordination of work outside of formal project plans became a lot more difficult when you couldn’t just ask for status over a cube wall. These tools, which were on an upswing anyway, got an extra boost [during the pandemic],” said Roth.
The report also forecasts that collaborative and social functionality will increasingly be embedded into CRM, HR and ERP business applications. Gartner predicts that 65% of enterprise application software providers will include collaboration features in their products by 2025.
That trend is well under way. Microsoft recently announced that its Teams collaboration tool will be made accessible in elements of its Dynamics 365 portfolio, for example, while Salesforce’s pending acquisition of team chat pioneer Slack is likely to bring tight integration across its suite of business apps.
The advantage of embedding social and collaboration functionality is the ability to include non-formalized processes, such as conversations or content sharing, in more process-oriented business tools.
“Going forward, we’ll start seeing [collaboration] products integrated more into packaged business applications, as opposed to being a siloed product on the side that, every now and then, you exit your business application to do some collaboration, and then go back to ‘work,’” said Roth. “It shouldn’t feel that way.”