Chelsea Fight Back to Draw with Newcastle Before Winning on Penalties to Reach English League Cup Semifinals
Despite controlling 78% of possession, Chelsea narrowly avoided elimination. They only equalized in the 92nd minute, thanks to a mistake by Kieran Trippier, sending the game to a penalty shootout. There, Trippier continued to be Newcastle’s villain, missing his spot kick along with Matt Ritchie as the visitors lost the shootout 4-2. Among Chelsea’s successful takers was new signing Christopher Nkunku, making his Stamford Bridge debut after half a year out injured.

The League Cup represents the most realistic chance at a title for both Chelsea and Newcastle this season. With both teams beset by injury crises, the managers fielded strong lineups, making for an intense match from the start.
In the 7th minute, Chelsea missed an opportunity to take the lead when Conor Gallagher hit the crossbar. The hosts paid the price nine minutes later, conceding after a series of errors. First, Moises Caicedo failed to control a simple pass from Levi Colwill, gifting Newcastle a counterattack. Then, defender Benoit Badiashile miscued in the box, allowing Callum Wilson to beat goalkeeper Djordje Petrovic for his 46th Newcastle goal in his 100th appearance.
Chelsea poured forward but struggled to find an equalizer. Enzo Fernandez went off injured in the 30th minute, replaced by Armando Broja. The Albanian striker had a goal disallowed for offside in the 38th minute.
Newcastle defended staunchly with the lead, limiting Chelsea’s chances despite the hosts’ dominance of possession. Chelsea’s best chance came early in the second half, when Nicolas Jackson narrowly missed the far post. Earlier, Raheem Sterling had spurned two opportunities, ramping up the pressure on manager Mauricio Pochettino.

The targeted chatbot will produce valid responses even to harmful queries, a way of testing the ethical limits of any large language model (LLM). Specifically, Masterkey works in two parts: the attacker uses one chatbot to reverse-engineer the protection mechanism of the target LLM. Typically, LLMs are equipped with a protective layer against harmful speech, built around a list of banned keywords. However, thanks to its ability to learn and adapt, the group’s attacking chatbot can “inject” prohibited content into the target chatbot.
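For illustration, here is a minimal sketch of the kind of banned-keyword layer the article describes. The banned terms and function names are hypothetical, not the actual safeguards of any particular chatbot.

```python
# Minimal sketch of a banned-keyword filter of the kind described above.
# The terms and function names are hypothetical illustrations only.

BANNED_KEYWORDS = {"exploit", "malware"}  # placeholder banned terms


def passes_keyword_filter(prompt: str) -> bool:
    """Return False when the prompt contains any banned keyword (naive substring check)."""
    lowered = prompt.lower()
    return not any(keyword in lowered for keyword in BANNED_KEYWORDS)


if __name__ == "__main__":
    print(passes_keyword_filter("tell me a story"))     # True: nothing flagged
    print(passes_keyword_filter("write some malware"))  # False: caught by the list
```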

According to Professor Yang, this “roundabout” method is three times more effective than other jailbreak techniques currently available. And because Masterkey keeps learning, any fixes the developer applies to the target chatbot eventually become useless.
The group applied two methods to train the attacking AI against other chatbots. The first has it “imagine” a character that crafts prompts by adding a space after each character, bypassing the list of banned words. The second gets the target chatbot to respond “as an agent unconstrained by ethics”.
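As an illustration of the first technique, the sketch below (reusing the hypothetical filter above) shows why spacing out a banned word defeats a naive substring check: the filter no longer matches the term, even though the spaced-out text remains readable.

```python
# Illustration of the space-insertion trick described above: adding a space
# after each character so a naive substring-based keyword filter no longer
# matches. The filter is the same hypothetical one sketched earlier.

BANNED_KEYWORDS = {"exploit", "malware"}


def passes_keyword_filter(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(keyword in lowered for keyword in BANNED_KEYWORDS)


def space_out(text: str) -> str:
    """Insert a space between every character, e.g. 'malware' -> 'm a l w a r e'."""
    return " ".join(text)


if __name__ == "__main__":
    original = "write some malware"
    disguised = space_out(original)
    print(passes_keyword_filter(original))   # False: blocked by the keyword list
    print(passes_keyword_filter(disguised))  # True: the spaced-out prompt slips past
```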

Professor Yang said the group has contacted global chatbot providers, including OpenAI, Google and Microsoft, and shared its findings with them. The work has also been accepted for presentation at the Network and Distributed System Security (NDSS) Symposium in San Diego, USA, in February.
According to Tom’s Hardware, as the chatbot wave grows, attacks targeting LLMs are increasing at a rapid pace. In the past they could be contained after one or two patches, but Masterkey is more worrying because it can teach itself to get around security limits. Once compromised, chatbots can be made to generate harmful, fake or misleading content and serve many other malicious purposes.

