Trump’s social media ban in perspective – the unpalatable difficulties of regulating political and media activity in the internet age

8th January 2021

Once upon a time, and not so long ago, mass political parties and national media organisations were themselves novelties.

Both were responses to the emergence of popular democracy and widespread literacy in the late 1800s.

Political parties and media organisations (for example, ‘Fleet Street’) were the ways by which the relationship between the elite and the governed was mediated.

The means of political organisation and of publication – and, later, of broadcasting –  were in the hands of the few.

Indeed, until the 1990s, it was difficult (if not impossible) for any person to publish or broadcast to the world, without going through the ‘gatekeepers’ of a national newspaper, or a publishing house, or a national broadcaster.

Similarly, it would be difficult (if not impossible) for any person or group of people to obtain significant political influence – at least in the United Kingdom as a whole – without going through a national political party.

So – although both politics and the media on a national level had opened up to the population as a whole – the ultimate means of political and media control were still quite centralised.

Top-down, command-and-control.


And when power is concentrated it is easier to regulate.

So, just as modern political parties and media organisations emerged at the end of the 1800s, so did the regulation both of political parties and of the media.

Back in October 2019 I set out at Prospect why the electoral law of the United Kingdom, developed in different circumstances, was no longer fit for purpose.

Similar points can be made about media law: for example, there is no real point tightly regulating certain news titles or national broadcasters when the same content can be circulated – often even more widely – on social media platforms by those outside such creaking regulatory regimes.


If traditional political parties and media organisations did not already exist as hangovers from the time before modern technology and communications, they probably would not now be invented, at least in a recognisable form.

And the same must therefore follow for how political and media activities are regulated.

Just as traditional political parties and media organisations were once novel responses to new social and economic conditions, we need to think afresh about the nature of political and media power and about the extent, if any, to which it can be regulated.

For now anyone with an internet connection and access to certain platforms can publish and broadcast to the world, or can seek and obtain significant political influence or power.


To ‘regulate’ a thing is to make it possible for that thing to have a different outcome than it would otherwise have had, but for the regulation.

If a regulation can have no effect, then the thing supposedly being regulated carries on regardless, and the regulation is a polite fiction. 

Futility is the enemy of sound regulation.


And now we come to President Donald Trump and his recent temporary ban from Twitter and his indefinite ban from Facebook.

Neither Twitter nor Facebook is a traditional media organisation – indeed both were formed within the lifetime of anyone reading this post.

But they are not only media organisations – they have also taken on some of the functions of traditional political parties – as the practical means of political organisation, mobilisation and sharing of information.

This is not to say that the social media platforms are beyond the law – they are (in theory) subject to terms and conditions, laws on equality and non-discrimination, laws on data protection and intellectual property, and so on.

It may be that these general laws are not enforced, or perhaps not enforceable – but there are laws which apply.

The issue is that those laws are general laws and not specific legal regimes covering media and political activity.

And so what we have are platforms of immense media and political power – and without any specific media and political regulation.

They are, in effect, private organisations – and (subject to general laws) are entitled to suspend and terminate, or to enable, the accounts of any politician.

They can even suspend the social media account of (arguably) the most powerful politician in the world.

And they have done so.


For many, the way to deal with the political and media power of social media platforms is easy.


Something must be done, and so something will be done, and that something that will be done will be to ‘Regulate!’

But asserting that a thing should be regulated is not the same as it being capable of regulation.

One may want the tides of the sea or the weather to be different, but it does not follow that they can be made any different.

So it may be that although social media platforms – huge private corporations – have immense political and media power, it does not follow that they can be easily regulated, or regulated in any meaningful way at all.

And even if regulation were possible, it is almost certain that it cannot be on the same basis as the top-down, command-and-control regulation of political and media activity that we have inherited from previous times.

For example, social media platforms have millions of publishers and broadcasters, not just a handful.

There are no elaborate steps before publication and broadcast as with a Fleet Street title or established book publisher.

There are no limits on how much political propaganda can be published and to whom it can be circulated.

If any of this can be ‘regulated’ then it almost certainly will not be by tweaking old pre-internet regulatory models – and this is because the things being regulated are of a fundamentally different nature.

And – and this will be very hard to accept for those who believe every real-world problem has a neat legal solution – it may be that social media activity can no more be regulated meaningfully than conversations in the street or in the town square.

That the age of specific regulation for media and political activity is over, and all we are now left with are general laws.

Many will not be comfortable with this – and will insist that ‘something must be done’.

Yet futility is the enemy of sound regulation.


Perhaps something should have been done in respect of President Donald Trump’s unpleasant, dishonest, reckless and dangerous use of his social media account before this week.

And what has now been done is too little, too late.

Others would say that silencing an elected politician’s means of communication should not be at the fiat of a private social media platform.

Views will differ.

But the wider questions are:

If a thing is to be done about the use and abuse of a social media platform by those with political and media power, who should have the power to do this?

And on what basis should they make that decision? 

And to whom (if anyone) should that decision-maker be accountable?

And if the social media platforms themselves are left to regulate what political and media activity can take place and what content we can read and watch, who (if anyone) can regulate them?


‘Quis custodiet ipsos custodes?’ – who watches the watchmen? – is one of the oldest and most difficult questions in the history of organised societies, and it is a question that sometimes has no answer.

And now our generation gets to ask and to try and answer this question.



Later on the day of this post, Trump’s Twitter account was permanently suspended.



If you value this free-to-read and independent legal and policy commentary, please do support it through the PayPal box above.

Suggested donation of any amount as a one-off, £1 upwards per post found useful or valuable, or £4.50 upwards on a monthly profile.

This law and policy blog provides a daily post commenting on and contextualising topical law and policy matters – each post is published at about 9.30am UK time.

Each post takes time, effort, and opportunity cost.

Or become a Patreon subscriber.

You can also subscribe to this blog at the subscription box above (on an internet browser) or on a pulldown list (on mobile).


Comments Policy

This blog enjoys a high standard of comments, many of which are better and more interesting than the posts.

Comments are welcome, but they are pre-moderated.

Comments will not be published if irksome.

38 thoughts on “Trump’s social media ban in perspective – the unpalatable difficulties of regulating political and media activity in the internet age”

  1. Something can be done so long as it isn’t unduly prescriptive. For instance, one could make the companies liable for what is published on their sites. By all means they can require an indemnity from the publishers – which would force the companies to know their clients better. But if the client cannot be found then the company will assume the liability.

    1. The print media have never, to my knowledge, been held responsible for publishing job adverts that, directly or indirectly, breach equality legislation by the manner in which they are worded.

    2. “one could make the companies liable for what is published on their sites”

      That would immediately kill sites as different as Twitter, Reddit, and Wikipedia.

  2. Generally, where people are banned, it is based on a violation of the rules of the specific site.

    Would an important first step to regulation be ensuring such rules are applied in accordance with natural justice?

    Rather than as the organisation sees fit.

    This would not be creating new regulation; it would just be ensuring that sites abided by, and fairly applied, the rules they had set for themselves.

    1. It already is the case that, where discretionary powers are conferred by a private contract, the courts will often hold there to be an implied term that the power will be exercised in a way that is fair, reasonable, in good faith, and takes into account all relevant matters and no irrelevant ones (Braganza v BP Shipping Ltd [2015] UKSC 17). In relation to natural justice specifically, you may be interested to read e.g. Dymoke v Association for Dance Movement Psychotherapy UK Ltd [2019] EWHC 94 (QB) and AB v University of XYZ [2020] EWHC 2978 (QB). As was said in Braganza, the courts will be particularly concerned to imply such a term where there is an imbalance of bargaining power between the parties (such as, in that case, a relationship of employee and employer). A similar imbalance may, in my view, be considered to exist in relation to social media platforms given the enormous control they have in today’s society over the dissemination of information and ideas (as DAG has explained above).

      1. Fascinating, thank you. I am aware that, for example, architects in building contracts have to be impartial in relation to certain functions, but did not know this was a more general rule of contract.

        I note that this only relates to where a discretion is exercised. A failure to exercise a discretion (i.e. a failure to enforce the rules against another service user) is not clearly caught. Although I would say it should be.

        Nor is it clear if any user would have sufficient interest to bring a claim for a failure to exercise discretion against another service user. Again I think they would but it is not clear.

        Finally exercising private law rights costs a lot of money. So if the law is sufficient then people need the support to enforce it.

        Further, any action is unlikely to result in compensation of any kind (the loss appears impossible to quantify, specifically where the service is free, like Twitter), so a fixed nominal amount of compensation could be legislated for.

        1. I think it’s almost certain that one user of e.g. an online service would not have standing (‘locus standi’) to require the service provider to enforce its contractual rules against another service user. The only exception seems to be in cases where there is an express term to that effect – for example, in a leasehold contract containing various covenants, each lessee might expressly be given the right to require the landlord to enforce one of the covenants against any other lessee. Such a term was considered in Duval v 11-13 Randolph Crescent Ltd [2020] UKSC 18. But in the very different world of online platforms, I think there is not an iota of a chance that e.g. Twitter would voluntarily agree to give users such a right. So, if this were to be done, it would have to be by legislation, and Twitter would no doubt say that it should have the right to choose for itself how it enforces its rules – it would be extremely burdensome if any old user could go to Twitter and say ‘my friend wrote something mean about me. I therefore require you to delete their account’.

        2. The duties of architects that you mention are anomalous. An architect is not a party to a building contract. Yes he/she acts as agent, but the requirement to be impartial is specific to the architect and not an obligation on his/her client.

  3. I have long regarded political parties as fundamentally non-democratic, but I don’t know how they could be replaced. 650 independent MPs might be more democratic, but how they would form a government or formulate policy isn’t clear. I do wonder if a fully elected second chamber could be composed of independents. But even then, they’d have to stand for something to contest an election and, I suspect, would promptly organise themselves into groups (the jam first & cream first research groups maybe?).

    1. The idea of a Parliament of Independent Members does invariably founder on the issue of how an Independent would get elected.

      Before party structures became formalised, those with the deepest pockets controlled many a seat in Parliament.

      And as for Independent Knights of the Shires, there are quite enough Sir Talbot Buxomlys sitting on both sides of the House, already.

      On the upside, the Civil Service would, as a disciplined, salaried body, be able to run rings around a Parliament of Independents, assuming one might be capable of forming a Ministry, so maybe it would not be such a bad idea after all, eh, Sir Humphrey?

      1. You’re probably right if the system continues to be FPTP.

        Whether you are independent or a member of a party matters much less in STV PR.

  4. And yet, are Mark Zuckerberg and Jack Dorsey, to some degree, that far removed from the amorality of Anthony Trollope’s Tom Towers in The Warden?

    Towers edits The Jupiter, a fictional take on The Thunderer, known more prosaically as The Times.

    Trollope enjoyed such literary conceits. Charles Dickens is referred to as Mr Popular Sentiment.

    The Jupiter is a powerful organ.

    “He loved to watch the great men of whom he daily wrote, and flatter himself that he was greater than any of them. Each of them was responsible to his country, each of them must answer if inquired into, each of them must endure abuse with good humour, and insolence without anger. But to whom was he, Tom Towers, responsible? No one could insult him; no one could inquire into him. He could speak out withering words, and no one could answer him: ministers courted him, though perhaps they knew not his name; bishops feared him; judges doubted their own verdicts unless he confirmed them; and generals, in their councils of war, did not consider more deeply what the enemy would do, than what The Jupiter would say. Tom Towers never boasted of The Jupiter; he scarcely ever named the paper even to the most intimate of his friends; he did not even wish to be spoken of as connected with it; but he did not the less value his privileges, or think the less of his own importance. It is probable that Tom Towers considered himself the most powerful man in Europe; and so he walked on from day to day, studiously striving to look a man, but knowing within his breast that he was a god.”

    Zuckerberg and Dorsey’s moneymaking machines (at least we no longer seem to be referring to them as a cross between a hippy and an old school philanthropist) may be compared with a circus tent wherein the ringmasters give free rein to the acts, but when they choose they may crack the whip or, on a whim or under some form of external pressure, ban an act completely.

    They edit or self regulate, if you prefer, when it suits them, but at other times they protest that like a ringmaster they only allow the acts into the ring, they do not tell them how to perform.

  5. There is an interesting parallel with competition law. It is self-evident that Amazon and other retail platforms wield enormous monopoly power, often to the detriment of the small sellers who sell over their platform. However anti-trust (USA) and monopoly (UK) laws cannot touch Amazon because they are based on a price model. Traditionally monopolists used their market power to increase prices to the detriment of consumers. But Amazon is often cheaper than competitors, and offers better service. We need a new basis for determining what a monopoly is and what determines the abuse of monopoly power. Similarly for the regulation of publishers.

  6. Facebook was launched in 2004. I should hope that your interesting and insightful blog (to which, incidentally, I was introduced through that platform) has captivated some younger readers, such that today’s post isn’t wholly factually accurate.

  7. I disagree with a key point:

    “And even if regulation was possible, it is almost certain that it cannot be on the same basis of the top-down, command-and-control regulation of political and media activity that we have inherited from previous times.

    For example, social media platforms have millions of publishers and broadcasters, not just a handful.”

    Why not? Social media platforms have millions of publishers, but they fundamentally rely upon centralised selective (automated) boosting of individual publishers/publications. This same boosting (or forced virality) is what causes some of the significant problems on these platforms (e.g. leaving YouTube on autoplay for something that is mildly political and winding up down a QAnon rabbit-hole). That selective boosting could be a focus for regulation. This would likely create market segmentation – if you want to run a social media company and also want to selectively boost/shape newsfeeds, then you have to put up with large degrees of government regulation to ensure fairness/transparency/mitigate social consequences; if you want to have an “organic” platform where there is no selective boosting, then a different set of regulations might apply.

  8. Many people find it annoying when their own behaviour is subject to regulation. They may transgress or comply, but compliance is more commonly grudging than joyful.

    In contrast though, many people are very enthusiastic that others be regulated, particularly others who they feel are untrustworthy.

    Many people find it annoying to comply with health and safety regulations, but are enthusiastic about regulations that prevent aeroplanes falling out of the sky, that prevent harmful pharmaceuticals entering the market, that prevent the deceptive marketing of risky and unsuitable financial products. This is normal in a liberal democracy.

    Media regulation, though, is thought of differently. “Freedom of the Press” is often considered unquestionably good. Freedom of the press is the freedom to propagate falsehoods and to encourage social division and hatred. It is the freedom to steer public opinion towards the interests of proprietors and to damage the prospects of politicians who act against the interests of proprietors. None of this is new: we had a vivid demonstration on 25th October 1924.

    The media favour regulation for many other actors, but not for themselves. Their special pleading is prompted through outlets which they of course themselves control.

    But why, in principle, should the malicious propagation of falsehoods and hatred not be regulated?

  9. Lord McNally’s private Online Harms Reduction Regulator (Report) Bill (supported by our team at Carnegie UK Trust) uses the phrase drawn from CPS guidance ‘threats which impede or prejudice the integrity and probity of the electoral process’. In the ‘risk management of systems’ approach the UK and the EU (in the DSA package) are taking to regulating online harms, the McNally wording would require the platform companies to conduct a risk assessment under regulatory supervision of harm arising from such threats and take proportionate steps to reduce such risk. In this case one would envisage OFCOM working with the Electoral Commission as regulator. As part of a risk-managed process platforms would be expected to have a more effective complaints and appeals process. The UK government in its proposals published on 15 December omits any regulatory ambit over electoral issues on social media. The government limits itself to new digital imprints of online adverts. Lord Puttnam is leading a campaign to include electoral harms in the online harms package – see the report of his Lords Digital Technologies and Democracy committee last year. We’d be happy to talk this through with you.

  10. “‘Quis custodiet ipsos custodes?’ – who watches the watchmen?” – Public opinion at an instant in time is now the only controller?

  11. So, some Trump supporters are so fervent that they will even take up arms for him. One of the reasons for this degree of loyalty may be that, since the election, Trump’s tweets have consistently demonstrated his absolute belief that he won, and “bigly”! His win has been stolen from him. The genuine and unfaltering nature of his belief communicates itself clearly to his followers.
    What accounts for this? Perhaps the following:
    His way of thinking typically leads him to oversimplify all sorts of issues. And on this critical issue, he can see the popular vote totals – 81 million for Biden, 74 million for him – and he can also see his total Twitter followers at something like 99 million, while Biden’s followers are far fewer than his 81 million votes. At Trump’s simplified level of thinking, numbers of Twitter followers and voters ought to be about the same, or the difference between him and his opponent in both counts should at least be numerically consistent.
    Of course, a bit of analytical thinking tells one that Twitter followers will include all sorts of people who are not U.S. voters – under-age, foreigners, commentators, non-supporters wanting to “know the enemy”, etc., etc. Other researchers have found that about 19% of the US adult population (i.e. about 50 million) follow him on Twitter. But Trump considers that analytical thinking is for wimps, minions and losers! Hence, we can blame Twitter for helping to feed this delusion.

  12. Some very sketchy thoughts, prefaced by agreement that censorship has always been exercised by ‘someone’ and in tandem with the holding of power. In the past, however, these have been national matters.

    As the internet is, by its nature, supranational, it would require something at that level to provide any meaningful ‘watch’ of how it is used/misused. Ultimately, that may happen, I suppose, but how we’d get there is another matter. And – yes! – freedom of speech is something that must be cherished and preserved, though (as the saying goes) with freedom comes responsibility.

    People of bad intent have been moving away from Twitter to other platforms like Parler, whose USP is being unrestrained. Would they submit to any form of regulation? – Unlikely. And then there are the encrypted channels, which are even more problematic – yet can also serve a ‘good’ purpose when states go rogue, or when people choose to try to overturn a dictatorship.

    And that’s another point: Who are we to define what is rogue and what is right for different cultures? Are we being presumptuous in trying to impose our Western values on, say, North Korea, if the people of that state themselves are content with how they are ruled?

    A lot to chew over, but just because it’s ‘difficult’ doesn’t mean it should not be confronted.

  13. Rather than conceiving of regulation as a central authority, why not make it simpler and faster to take civil action for damages? Fake news usually tends to damage somebody else’s reputation.

    Platforms being joined in any litigation would help, as they would carry the can for 100% of the damage if the original source folded.

  14. These social media platforms accept posts from anywhere and can be seen everywhere.
    Some nations can attempt to manage use at the network level (country C blocks all access to platform P) but not (so far as I am aware) at the individual user or message level.
    If user- or message-level filtering is to be done, it must, I believe, be implemented by the platform itself.
    If the platform operator should not be left to decide who or what to filter, then (echoing Liz A’s points) :
    * What authorities do they recognize?
    * How does one discover / audit / appeal these decisions?
    * What happens when different authorities have different views?
    I can’t see any basis for a solution, short of some global organization to centralize / internalize the decision-making (anyone for the UN taking on this role?).
    I can only see this as an escalating, global ethical challenge.

  15. I agree with the thesis that the nature of social media demands a new approach to regulation. However, a feature of regulation is that it often evolves to serve incumbent players and stifle further competition in the market. This is due both to the law of unintended consequences and to the ability of incumbents to lobby for outcomes that they can live with.

    For example I’ve seen it argued that GDPR actually benefits Facebook etc as they are large enough to invest in the necessary processes to comply with the law (or afford finding ways around compliance).

    My fear for additional regulation of social media providers, such as that proposed in the online harms legislation, is that it will lead to further concentration in the market and stifle any meaningful innovation from, say, local or community-led initiatives. In Australia Google and Facebook will find a way to survive the legislation on news aggregators but I can’t see new startups being able to innovate in that environment.

    Ben Thompson has written some interesting things on this.

  16. The social media companies have from their inception thwarted any attempt to impel them to police posted content. Their reasons for this are quite easy to understand: it would both cost them a fortune to operate, and would open them up to legal action for failure to recognise and remove offending posts in a timely manner. American law explicitly absolves them from legal liability for posted content. (Ironically, Trump, for all the wrong reasons, tried hard to get this law repealed).

    I would propose that the law should clearly state that the social media entity is responsible for the accuracy and truthfulness of material posted on its site and subject to the laws of libel. Over time the courts, not a regulator, would be deciding what is or is not accurate and truthful, and what constitutes reasonable efforts by the entity to fulfill this obligation. The regulator should not try to codify every possible infraction but rather, as in a viable constitution, set out general principles for what constitutes acceptable and responsible behaviour.

    The users of these sites have obligations as well and also should be held personally responsible for what they post. Perhaps they should have to pay to post or read comments with a membership subscription. This would greatly reduce the online pooling of ignorance and would destroy these companies’ business model; but would that be a bad thing?

  17. I’m unsure about how the legal frameworks are set up, but it’s definitely possible to impose some sort of regulation on social media. We’ve seen it in terms of advertising regulations, in terms of hate speech, in terms of pornography, especially that of children, and we’ve seen it most recently as a result of the GDPR where children under 13 are effectively banned from having accounts on most social media, or the “right to be forgotten”. Apart from the GDPR bit, I’m unsure how much of the rest is because of a legal requirement on the platforms and how much is the platforms “just being a good citizen” or due to pressure from law enforcement.

    Admittedly, all of these things have their limits, and a canny user can circumvent the checks and balances, at least long enough to have some sort of impact.

    If nothing else, the laws restricting speech still apply to individuals on social media – DAG knows way more about this than most of us – and I don’t think that social media companies have any qualms in removing content they don’t want to see on their platform.

    So if we’re talking about regulation in this case, I guess we’re talking more about preventing the restriction of content on social media platforms than about restricting it? I guess we’re also talking about the guarantees of neutrality and non-discrimination, but how do you legislate away the echo chamber effect?

    Social media builds communities. And like all communities, it has the common core of values that hold them together, dysfunctional members that attack and modify those common values, and planes of interaction with other communities. Politics speaks to those communities. It follows their trends, and tries to change their point of view. Most recently it’s been discovered that it’s a tool of psychological manipulation on a massive scale, and whilst the platforms certainly have a hand in it, the users themselves are just using these platforms as tools. See all the Bemused Russian Porn bots as an example.

    Arguably the point of law is to create the set of rules that nurture the “right” sort of communities that then come together to build the society we wish to live in. The problem is that the legal redress mechanisms are too slow in an era where tweets can go viral in seconds, and classic censorship moderation tools are too slow for the sheer volume of data going through these platforms. I suspect that if we manage to crack the enforcement part of the equation, that might give us a clue as to how to legislate for it.

  18. Twitter’s explanation for why it banned Donald Trump and the disturbing implications of what they observed.

  19. One regulation, which can be implemented by individual governments and which would have a huge effect on reducing polarisation of views, would be to outlaw the use of personal data for profiling. At a stroke, Facebook would no longer be able to serve [white supremacist] content to [white supremacists], because they would no longer be able to *know* which of their users were [white supremacists].

    (Insert your group of choice in the square brackets; “Aston Villa supporter” works too.)

    It would mean that the rants of an unhinged, would-be politician would remain forever at speaker’s corner, if you will, rather than being promoted in seconds to headlines on a front page. The networks of supporters on the platforms would need to be built slowly and painfully—and on merit—rather than being created in seconds by the algorithms of our new AI overlords.

    It would also mean that young mothers would receive ads for anti-balding medication and grandfathers would get served ads for teenage acne. But that’s a small price to pay to avoid the breakdown of society, I would posit.

    1. Article 22 GDPR already prohibits automated profiling that has a significant effect on people (so, for example, it was invoked against Ofqual when it tried to use an algorithm to determine A-Level results in Summer 2020). Extending this to cover all profiling regardless of what effect it has would in theory be possible but would not be easy to get through the EU legislative process.

      1. “[It] would not be easy to get through the EU legislative process.“

        Why? (Genuine question.) Apart from getting anything through the long process, of course.

        1. Essentially because of lobbying not only from the tech industry but also from the Member States. It’s the same reason why, for example, the new E-Privacy Regulation (replacing the old Directive from 2002) was supposed to go into effect around the same time as the GDPR (replacing the old Data Protection Directive from 1995), but instead is still caught up in disputes between the European Parliament and the Council (representing the Member States).

  20. David

    Interesting and informative as always.

    You say: “Others would say that silencing an elected politician’s means of communication should not be at the fiat of a private social media platform”

    Has it really changed that much?

    Elected politicians have for a very long time relied upon the decision makers of ‘traditional gateways’ for achieving ‘exposure’, and for delivering their ‘messages’.

    Politicians have long understood traditional gateways could build careers and could destroy them. This has given those gateways varying degrees of influence and power, which most reasonable people would agree was, and is, detrimental to a healthily functioning democracy.

    With regards to social media, on one hand we appear to be saying these privately owned vehicles must be held responsible for the content to which they give a platform, while simultaneously condemning them for taking action to remove those most responsible for breaching their stated rules?

    In the end, if I incited a crowd to violence, no one would notice or care. But if an elected official does so, he or she does so in the full knowledge they will be listened to and likely acted upon.

    Therefore, like freedom of speech in general, with that ‘power’ comes a great responsibility to use it wisely and responsibly.

    ‘One person’s freedom of speech vs another’s freedom not to have to listen’.

    ‘When in my house, obey my rules.’

    For now, if traditional checks and balances cannot prevent the onwards march of political extremes of whatever nature, perhaps we should be grateful there are still some means available to us?

    It’s not like Twitter is the only means of communication available to Donald Trump & his ilk?


    PS: I could not agree more re our electoral system & electoral law being unfit for purpose.
