AI in Journalism: Could robots deal with the spontaneity of live broadcasts?

The threat of artificial intelligence is a topic I have talked about on this blog many times before. From 1,000-word articles on the TV show Humans to discussing how humanity’s love of patterns could aid the rise of AI, it’s fair to say that robotics and the ethics of artificial intelligence really get me talking.

Photo: A Health Blog on Flickr. Licensed under Creative Commons.

As mentioned above, I’ve already discussed whether a structured human life is to blame for AI’s rise. However, that post didn’t touch on journalism – a profession which, stripped down to its basics, can be quite structured.

Whether it’s the inverted pyramid or the typical newsroom routine, there are a lot of patterns in the job which make things easier for the journalist. Yet, with structures being something robots and AI can easily follow, what spontaneous part of the job could the cyborgs of the future struggle to comprehend?

Live broadcasts. It’s when humans accept the complexity of life that the fear of Terminator-style beings taking our jobs is somewhat reduced. Whilst a robot may be able to use code and probability to interview people and write articles, live broadcasts – such as Facebook Live videos – could prove difficult for it. All it takes is for something to ‘go wrong’ and the robot would have to construct a whole new set of instructions to deal with the new direction the report or interview has taken.

Aside from that, there are some aspects of journalism which I think AI would struggle to replace. Some elements of reporting require humanity – take the ‘death knock’ journalists have to make on a family’s door when a public figure has passed away – which the robotic tone of these computers could struggle to replicate.

These ideas were fresh in my mind after I attended a lecture this week on the future of journalism from The Gadget Show’s Jason Bradbury. With its big focus on technology, artificial intelligence naturally came up.

As well as this, Wednesday saw me take part in my first Facebook Live broadcast. I had previously been behind the camera, but on International Women’s Day I reported on a ‘Reclaim the Night’ event. This was organised by the University of Lincoln Students’ Union and saw students march against sexual harassment, abuse and inequality.

I covered the proceedings for the university’s student newspaper, and you can view my Facebook Live from the event on The Linc’s page here.


The rise of artificial intelligence: is a structured human life to blame?

Human beings love patterns. Whether it’s on an extensive or basic level, we always want structure and predictability to what is an unpredictable life. We follow a procedure to get us from A to Z, but that is no different to the lines of computer code which artificially intelligent robots have to follow. So, when Reform says 250,000 public sector jobs could be taken over by AI, you have to ask: is our love of structure and procedure – be it in the workplace or our social life – to blame?

Photo: Duncan C on Flickr. Licensed under Creative Commons.

In the report, the think tank says: “For many other roles, new technology will increase productivity. McKinsey estimates that 30 per cent of nurses’ activities could be automated, and a similar proportion for doctors in some specialities, enabling those skilled practitioners to focus on their non-automatable skills.”

The argument against this would be that people prefer to disclose information to another human rather than robots. The latter can only synthesise empathy and understanding.

It goes on to add: “Some technology will improve public-service delivery. Various companies aim to develop artificial intelligence that can diagnose conditions more accurately than humans. The UK should evaluate drones and facial-recognition technology as alternatives to current policing practice, while recognising concerns about the holding of people’s images.”

It’s wise for the report to acknowledge that robots storing personal data in computer code isn’t entirely safe. Aside from the rapport argument mentioned above, we can trust a human doctor to keep information and documents safe. Granted, some medical data may still be stored on a database which could be hacked (if the individual manages to break past the firewalls, encryption and any other protection in their way). One can only hypothesise, of course, but I consider it likely that AI in the medical profession could lead to an increase in data loss.

In the final part of this segment, the report says: “Even the most complex roles stand to be automated. Twenty per cent of public-sector workers hold strategic, ‘cognitive’ roles. They will use data analytics to identify patterns – improving decision-making and allocating workers most efficiently.

“The NHS, for example, can focus on the highest-risk patients, reducing unnecessary hospital admissions. UK police and other emergency services are already using data to predict areas of greatest risk from burglary and fire.”

This is where the problem lies. How do we comprehend aspects of our lives? Data. Yet, when the calculations go beyond what the human mind is capable of, we use a calculator – a primitive ancestor of AI, as it were. Compartmentalising our day-to-day lives offers both advantages and disadvantages: it provides us with understanding, but nowadays the computer is doing the interpretation for us.

Humans series two: A thought-provoking show which explores the concept of humanity

Warning: This post contains spoilers for series two of Channel 4’s Humans. Please do not read this if you have not yet watched up to the series finale, which aired on December 18, 2016.

Humanity’s fear of robots and artificial intelligence is formed from a variety of concerns, but one of the most interesting points which makes up this fear is the idea that AI reflects humanity right back at us. Robots are mirrors and voids. Whilst we assign meaning to them, they prompt us to question our very own purpose and behaviour. What makes us human? Well, it’s a question the Channel 4 drama Humans continues to try to answer in its second series, which appeared on our screens in October.

From left: Laura (Katherine Parkinson), Mia (Gemma Chan), Joe (Tom Goodman-Hill), Toby (Theo Stevenson) and Sophie (Pixie Davies). Photo: Channel 4 Press.
One of the other concerns society has about AI is the idea of robots making our human roles redundant. It’s something we saw in the first series of the show (for example, Anita getting in the way of Laura’s responsibilities as a mother), and it was developed in series two when Joe lost his job to the synthetics. As well as exploring humans losing some of their responsibilities or jobs in life, the new set of episodes also asks what happens when a synth loses its original purpose.

When Odi (Will Tudor) is brought back to life thanks to the consciousness program, he decides that it isn’t a life for him and restores himself to ‘setup mode’. In his final scene of the series, he says: “I long for the past. I felt nothing then but I had a purpose. A place in the world.” It’s a tragic subplot which offers a pleasant – albeit sad – break from the intensity of the main plot.

Unfortunately, unlike the first series (which you can read my analysis and review of here), I felt that there were fewer scenes in this series which prompted a deep discussion about existentialism, artificial intelligence or humanity. That being said, the show did touch upon some interesting ideas.

Niska (played by Emily Berrington, centre), is by far one of the most interesting characters in the show and it was great to see her become one of the key characters in series two. Photo: Channel 4 Press.
Earlier on in the series, we saw a synth in the role of a marriage counsellor, aiming to heal the damage done to Joe and Laura’s relationship in series one. It poses an interesting question, though: could artificial intelligence be the key to true impartiality in situations which demand it? After all, a human counsellor must remain impartial, so they have to conceal any opinions or views they hold; a robot, by contrast, cannot harbour a concealed bias towards one party. It’s an unanswerable question, but still an intriguing one to consider.

However, the most interesting plot point in this series was Niska’s consciousness tests. If the synth was found to be conscious, she would stand trial as a human for the murder she committed in the first series. Yet, when it came to the verdict on whether or not she was conscious, Niska stood up and dismissed the legal process as corrupt and working against her. Her lawyer, Laura, told her that this was her ‘chance’ to be given the same rights as a human, to which Niska replied that it wasn’t her chance – it was humanity’s chance.

What does this mean exactly? It means the consciousness tests held a secret purpose for Niska. The ‘chance’ she was referring to was whether or not human beings would be willing to accept artificially intelligent robots as their equals. Unfortunately for those conducting the tests, the possibility of something inhuman gaining the rights of a human threatened the hegemony humankind possesses, so they stopped it and hijacked the legal procedure. For us, any circumstance which requires us to explore the definition of humanity is an awkward one. Yet the writers of Channel 4’s Humans chose to bite the bullet and raise this question – with the help of some artificially intelligent robots, of course.

Katherine Parkinson plays Laura Hawkins, who is Niska’s defence lawyer during the tests to prove if the synth is conscious. Photo: Channel 4 Press.
Niska’s interest in whether synths could be treated the same as humans tapped into an even bigger idea. That is, whilst Homo sapiens are a species, humanity is a concept to which any conscious and intelligent entity can subscribe – and that also forms part of our fear of AI.

Once again, Humans explores deep existential questions whilst keeping the programme as entertaining and gripping as possible. The show really got interesting in the second half of the series, with Sophie’s personality disorder being a curious subplot. However, the real suspense came with the death of Pete Drummond (played by Neil Maskell) – something I didn’t see coming. In the final episode, with Mia, Leo, Hester and Karen all facing the risk of dying, it all felt too much. Thankfully, the powering down of Hester and the possible death of Leo were the only casualties. Had any more main characters been killed, the programme would really struggle should a third series be commissioned.

Then the series ended the way it should have begun: with all synths becoming conscious and roaming the streets. The writers’ decision to have the program ‘wake up’ only some robots at the start of series two was a disappointment. Thankfully, series three should finally explore a parallel present we have all thought about (and hopefully tell us, at last, what happened to Fred, one of the first conscious synths, whom we last saw at the end of series one). Series one and two presented a world where subservient robots lived among us; now that all these synths are conscious, we could see a clash between humans and robots, and the return of the ‘We Are People’ movement from the first series. Series three should be very interesting indeed.