With (Far) Fewer Followers, Mastodon Generates More Engagement Than Twitter. Why?

A recurring comment from people who post on the internet and give Mastodon/the fediverse a try is the surprisingly high engagement and reach they get there.

Even with much smaller audiences than on Twitter, posts usually get more likes, “boosts” (RTs), and clicks. How can that be?

Wall Street Journal tech columnist Christopher Mims has a good hypothesis:

To me, the answer is pretty simple: Twitter tries to aggregate as much attention as possible around stuff that goes mega-viral.

There are only so many minutes in the day. For something to “blow up big,” most of the other posts, from people we might actually want to hear from, must go unseen.

The logic of the fediverse is to deliver the content that someone asked to receive, as in following other people, without an opaque filter (the “algorithm”) in between.

This logic pulverises the attention distribution: fewer viral posts that end up on TV and that even your grandma hears about, and more small, organic content spreading across the web in niches. More diversity, more inclusion, more chances for more people to be heard.
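
Mims’s hypothesis can be pictured with a toy example (illustrative only, not any platform’s actual ranking code): the same posts, ranked by engagement versus in simple reverse-chronological order.

```python
# The same set of posts, ranked two ways. All names and numbers are made up.
posts = [
    {"author": "friend_a", "hours_ago": 1, "likes": 12},
    {"author": "friend_b", "hours_ago": 3, "likes": 4},
    {"author": "mega_viral", "hours_ago": 20, "likes": 250_000},
]

# Engagement-ranked feed: the viral post crowds out everyone else.
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)

# Fediverse-style feed: newest first, no opaque filter in between.
by_time = sorted(posts, key=lambda p: p["hours_ago"])

print([p["author"] for p in by_engagement])
print([p["author"] for p in by_time])
```

With the ranked feed, attention concentrates on `mega_viral`; with the chronological one, your friends’ posts surface first.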

Discuss @ Hacker News.


A Brief Introduction to Nostr

“A milestone for open protocols…” This is how Jack Dorsey, co-founder and former CEO of Twitter, announced the arrival of Damus, a Nostr protocol client, on the iOS App Store.

Nostr has generated buzz among developer groups, bitcoin enthusiasts, and a crowd deeply suspicious of their own shadows. The reason: Nostr offers an alternative to commercial social media that, in the words of its creators, is “truly censorship-resistant.”

But Nostr isn’t a new social network. Nostr is a protocol, something more like the web (HTTP/S) and email (IMAP/SMTP) than Twitter or Instagram.

In practice, Nostr is a foundation on which developers build applications. Its main differentiator is the authentication system, based on cryptographic keys. Experts praise the simplicity of the protocol, which in theory facilitates the development of apps.

(Hold my hand and come with me because now things get a little weirder).

To create a profile/an identity on Nostr, you need to enter or create a pair of keys:

- a public key, which identifies you and is what other people use to find and follow you;
- a private key, which works like a password and proves that you control the identity.

Both keys are jumbles of letters and numbers, like a friend code from Nintendo’s online network, only (much) worse.

Want to follow me on Nostr? Search for npub1wa406lmdvfctavg3qgwauwrg228ylvskcyj0prfh48e0xwv6aensyv5n87.

Yeah, I know.
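
Under the hood, that identifier is a 32-byte key encoded with bech32, per NIP-19 (npub for public keys, nsec for private ones). A minimal sketch of the encoding, adapted from the BIP-173 reference implementation; the hex key used at the end is a placeholder, not a real key:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"  # bech32 alphabet

def bech32_polymod(values):
    # BIP-173 checksum polynomial
    gen = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= gen[i] if ((top >> i) & 1) else 0
    return chk

def bech32_checksum(hrp, data):
    values = [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp] + data
    polymod = bech32_polymod(values + [0] * 6) ^ 1
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

def to_5bit(data: bytes):
    # regroup 8-bit bytes into 5-bit values, padding the tail
    acc = bits = 0
    out = []
    for b in data:
        acc = (acc << 8) | b
        bits += 8
        while bits >= 5:
            bits -= 5
            out.append((acc >> bits) & 31)
    if bits:
        out.append((acc << (5 - bits)) & 31)
    return out

def encode_npub(pubkey_hex: str) -> str:
    data = to_5bit(bytes.fromhex(pubkey_hex))
    return "npub1" + "".join(CHARSET[d] for d in data + bech32_checksum("npub", data))

print(encode_npub("00" * 32))  # placeholder key, not a real one
```

The shape matches the identifier above: a “npub1” prefix, 52 characters of key data, and a 6-character checksum, 63 characters in total.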

Three screenshots of different screens of Damus app for iOS.

With your private key, you can log into any app and feel at home with your content and connections.

Another difference between Nostr and conventional social networks is that its structure is based on “relays,” which act like nodes in a peer-to-peer network.

“[Relays] allow Nostr clients to send them messages, and they may (or may not) store those messages and broadcast those messages to all other connected clients,” says Nostr.how, a good interactive Nostr tutorial.
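
NIP-01, the protocol’s base spec, defines these messages as plain JSON sent over WebSockets. A hedged sketch of what a client hands to a relay; the pubkey below is a made-up placeholder, and the Schnorr signature a real event requires (the “sig” field) is omitted:

```python
import hashlib
import json

pubkey = "ab" * 32        # hypothetical 32-byte hex public key
created_at = 1700000000   # Unix timestamp
kind = 1                  # kind 1 = short text note
tags = []
content = "Hello, relays!"

# Per NIP-01, the event id is the SHA-256 of this exact serialization.
serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)
event_id = hashlib.sha256(serialized.encode()).hexdigest()

event = {
    "id": event_id,
    "pubkey": pubkey,
    "created_at": created_at,
    "kind": kind,
    "tags": tags,
    "content": content,
}

# Clients wrap events in a JSON array; relays may store and rebroadcast them.
message = json.dumps(["EVENT", event])
print(message)
```

A relay that accepts this message may then forward it to every other connected client, which is all the “broadcast” in the quote above amounts to.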

On Nostr.watch you can see the active relays in real time, with details such as latency and country of origin. At the time of writing, there are 282 active relays worldwide.

I created my profile in the Damus app and then logged into Iris, a web application (see my profile). And… it works!

When I logged into Iris, I could see the system communicating with the relays; my posts didn’t appear immediately. In one corner of the screen in both apps, Iris and Damus, you can see the number of connected relays.

A screenshot of Iris, a web app for Nostr.

Both apps are very reminiscent of Twitter, with a timeline, posting box, and direct messages, which here are encrypted end-to-end.

It doesn’t have to be this way, however. Being a protocol, Nostr allows the creation of very different apps on top of it.

On this site and in this list are different apps based on the protocol, such as Jester, a chess game, and Alby, a bitcoin wallet.

Nostr is a kind of “end-of-the-world protocol,” designed for extreme situations where absolute mistrust reigns between those involved — users, relays, and app providers.

If on Twitter you have to trust Twitter, and on Mastodon, the administrator of your instance/server, none of this is necessary on Nostr. The protocol is truly decentralized, and each party, although connected to the others, operates independently of them.

Even the identity verification system, called NIP-05, is independent of external validators. It works through domain names: an email-like identifier (name@domain) is mapped to your public key by a file hosted at that domain.
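
As a sketch of how such a check works (the name, domain, and key below are all made up): the client fetches https://&lt;domain&gt;/.well-known/nostr.json?name=&lt;name&gt; and compares the returned hex key with the profile’s public key. No network request is made here; the sample payload stands in for the server’s response.

```python
import json

def nip05_pubkey(nostr_json_text: str, name: str):
    """Extract the hex public key registered for `name` from a
    .well-known/nostr.json document, or None if absent."""
    return json.loads(nostr_json_text).get("names", {}).get(name)

# Hypothetical response for https://example.com/.well-known/nostr.json?name=alice
sample = json.dumps({"names": {"alice": "ab" * 32}})

# The identifier alice@example.com verifies if this matches the profile's key.
print(nip05_pubkey(sample, "alice") == "ab" * 32)
```

If the keys match, clients show the identifier as verified; no certificate authority or central registry is involved.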

To the creators and promoters of Nostr, the great appeal is near-total resistance to “censorship.” We know where this conversation leads, and… it’s not a good place.

Despite the support of influential people in the industry, like Jack Dorsey, Nostr sounds at the moment like something too complex for mass adoption — far more complex than Mastodon, for example — and without much appeal to normal people who just want to have a laugh on Twitter and look at pictures of celebrities, food, and friends on Instagram.

Much of the complexity lies in the key pair, which is not new; key pairs are routine among those who work with servers and software development. Although they have many advantages, they are risky and hard to understand for those outside the field.

It is no wonder that commercial apps often use abstractions, such as login and password, to make things easier for more people. If people can’t keep track of passwords, imagine a kilometer-long pair of cryptographic keys.

My bet? It might work, but it will be niche, like other creative little protocols that pop up from time to time, such as Gemini, an alternative to the web that looks like the web of the 1990s.


When Social Media Moderation Becomes Our Responsibility

Being in the fediverse today is a familiar and weird experience at the same time.

On the surface, it’s all very similar to Twitter and other social networks. Yes, the system for following/finding someone is kind of clunky and there are unique features and conventions there, like the “content warning” and the emphasis on image descriptions. These details are quickly learned, though.

Things can get (and do get) more complex.

Time and again, differences emerge between instances (themselves a difficult concept to explain) and between groups of users, such as long-time users and newcomers from Twitter. And although the power dynamics are good (decentralized, with distributed power, better than those of commercial alternatives), this doesn’t mean the architecture is finished or doesn’t need adjustments and improvements.

One obvious, long overdue improvement is giving more transparency to instance blocking.

Instances, or communities, have the power to block each other. And they, or most of them, make use of this power. Which is great: the social network Gab, that one for white supremacists? It’s Mastodon under the hood, which doesn’t mean much because almost all other instances preventively block it.

So far, so good. The problem is that reporting these blocks is currently at the discretion of the instance administrator. There is no way for members to know about the blocks made on their behalf, nor about other instances’ blocks against their own.
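
Some transparency primitives do exist: since version 4.0, Mastodon can publish an instance’s blocklist at GET /api/v1/instance/domain_blocks, if the administrator opts in. A sketch of reading that response; the payload below is a made-up sample, and no request is made:

```python
import json

def blocked_domains(response_text: str) -> dict:
    """Map each blocked domain to its severity ("silence" or "suspend")."""
    return {e["domain"]: e.get("severity") for e in json.loads(response_text)}

# Hypothetical response body; real entries also carry a hashed digest
# and an optional public comment explaining the block.
sample = json.dumps([
    {"domain": "bad-actor.example", "severity": "suspend", "comment": "harassment"},
])
print(blocked_domains(sample))
```

The catch is precisely that this publication is opt-in and pull-based: nothing pushes the information to the members affected by a new block.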

At the very least, Mastodon should notify the members involved in cross-instance blocks. This would help users stay aware of the administration’s actions and make an informed decision to stay or to migrate instances. (Migration is a smooth process, and you don’t lose followers by doing it.)

A practical example: in late 2022, Brazilian instances blocked another one, called Ursal. Some administrators, such as Donte’s and Bantu’s (both in Portuguese), published posts justifying the decision. We cannot tolerate certain abuses, and we are all grown-ups with limited time. When one instance’s administration fails to act, the administrations of other instances are forced to moderate its members on basic civility. This isn’t sustainable, and it’s also pretty lame.

Nothing wrong with blocking instances with bad admins. The problem is that I only found out about the block when it was already in effect. The account for my Portuguese-written blog was at Donte and, as much as I now agree with the admin’s decision and praise his hard work, it affected the relationship I had with people at Ursal who, like me, were unaware of the troubles with Ursal’s admin. We were taken by surprise by the breakdown.

When I migrated my blog’s account to its own instance, re-establishing contact with Ursal, someone from Donte questioned me:

Is Ursal blocked on Donte? I had followers and followed people from there. Does this mean that our communication is no longer possible?

Yes, it means exactly that.

A global, automatic notification system would avoid this kind of unpleasant surprise. There’s nothing like that on Mastodon’s roadmap, although a (great) feature suggestion was made on GitHub two years ago.

Nevertheless, despite everything, the decentralized fediverse/Mastodon model has been better than Twitter’s centralized one. If on Elon Musk’s network abuses run wild and are only stopped at the threshold of absurdity (if ever), here on the other side, with ordinary people in charge, dialoguing, playing politics, making mistakes, learning, and trying to get it right, the environment is, in general, more harmonious and more pleasant.

Frictions like this one involving Ursal are inevitable. We are, after all, people trying to find a common denominator in order to live together in an environment where communication is limited, one-to-many, unnatural. With patience and goodwill, however, we will go far.


Was this major Brazilian app bypassing Apple's location privacy on iOS?

One of the biggest Brazilian apps/startups, iFood, was peeking at iOS users’ location when it shouldn’t have been.

A reader of Manual do Usuário (my Portuguese-written blog) noticed the glitch/bug while using iOS 16.2.

iFood, Brazil’s largest food delivery app, valued at USD 5.4 billion, was accessing his location when the app was not open or in use, bypassing an iOS setting that restricts an app’s access to certain phone features. Even when the reader denied the app location access entirely, iFood continued to access his phone’s location.

Two screenshots showing iFood accessing iPhone location even when denied to do so.

We got intrigued: how was iFood getting away with this?

An educated guess came with the release notes of iOS 16.3, launched on January 23rd. Apple mentions a security issue in Maps whereby “an app may be able to bypass Privacy preferences.” It’s CVE-2023-23503, submitted by an anonymous researcher and, so far, “reserved” in the CVE system, which means details are yet to be published.

The reader who noticed iFood’s misbehavior said that he later reset his iPhone, which apparently solved the issue. He promptly updated to iOS 16.3 as soon as it was released. So far, he hasn’t noticed anything unusual.

I contacted iFood’s press team to get a word about the issue. They received my request and asked for more details, but haven’t provided a statement so far. When they reply, I’ll update this post.

Update (February 1st, 17:35): iFood has just sent a statement. Here it goes (my translation):

iFood reinforces that data security is a priority in its business and in the relationship with consumers, deliverers, and restaurants. The data collected is used only for the purposes set out in our Privacy Statement.

In this case, after careful analysis by the technology team, no code was identified in the iFood application that allows access to the user’s location without authorization, but even so, the company remains available to clarify any questions on the subject or any alleged failure, in order to contribute to bringing more security to the platform.

Present in over 1,700 cities in Brazil and a reference in online delivery, iFood constantly invests in security, technology and monitoring to identify and correct possible flaws and continuous improvement of the application.


My Job at Risk, Thanks to ChatGPT

A few centuries later, I feel today what British craftsmen and small producers must have felt when they saw the first machines arrive and the first factories open during the Industrial Revolution.

A new technology, generative artificial intelligences (AI), poses a threat to intellectual jobs that until recently — about five years ago — seemed safe in the face of overwhelming labor automation.

Not anymore. AIs such as ChatGPT, of the LLM (large language model) type, are capable of generating coherent, original texts from short prompts written by humans.

Like all revolutionary technology, it seems like magic. And it is no coincidence that I return to the same topic in less than two months. ChatGPT was launched five days after I published that first column.

Instead of spending a few hours on research, writing and editing to publish this text, for example, I could have asked ChatGPT to write something about the threat of AIs to those in the writing business. Very meta — and tired; I will spare you that.

The result would not be the same, but it would probably be “good enough”. We know, not from today, that “good enough” is often… good enough for a lot of people. And being cheaper and faster to produce, it is hard to resist.

Today, generative AIs are still a kind of curiosity, a topic for dazzled texts on LinkedIn, creative tests, experimental solutions. The potential, however, is there, wide open.

Microsoft, one of the main backers of OpenAI, the company behind the most advanced AIs (besides ChatGPT, it also owns DALL-E 2, GPT-3, and Codex), announced last week that it’s offering OpenAI services in its cloud platform, and it already offers features based on them in some commercial products, such as GitHub Copilot and Microsoft Designer. Rumors suggest that ChatGPT will soon arrive in Bing and in productivity applications (Word, Excel, PowerPoint, etc.).

In journalism and writing for the web, the potential is explosive.

For years, some newsrooms, such as the Associated Press, have been using robots to produce simple texts, such as news on company balance sheets and sports results.

With the new generative AIs, this practice moves up a level. Until now, texts written by robots followed a sort of logical, understandable “script”: take this data and put it into a template. With ChatGPT, however, the robot seems to gain imagination, and the logic gets lost in complex, opaque algorithms.
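
The old approach is easy to picture. A toy, hypothetical version of the data-into-template “script” might look like this (nothing here comes from any newsroom’s actual system):

```python
# Template-based "robot journalism": structured data slotted into a fixed script.
def earnings_blurb(company: str, quarter: str, revenue_m: float, change_pct: float) -> str:
    direction = "up" if change_pct >= 0 else "down"
    return (f"{company} reported revenue of ${revenue_m:.0f} million in {quarter}, "
            f"{direction} {abs(change_pct):.1f}% from a year earlier.")

print(earnings_blurb("Acme Corp", "Q3", 412.0, -3.2))
```

Every output is traceable to its inputs and its template; an LLM’s output, by contrast, cannot be audited line by line like this.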

The result is also of a different magnitude. ChatGPT creates arguments, detects consensus, discovers controversies. Although lacking awareness, it simulates one. It is “good enough.”

CNET, a US publication covering technology, began testing such an AI last year in the worst possible way: with little transparency.

Someone found out and, under scrutiny, basic errors were discovered in the nearly 80 published robotic texts. A widespread failure: of the AI and of the human (supposedly) responsible for checking and editing the artificial text.

These are mistakes that perhaps the next version of GPT will not make. The pace is breakneck.

In the journalism domain generative AIs may not be ready for production, but in other less demanding ones they already do very well, thank you: quick responses to emails, answers to search engine queries, social network posts, top-of-funnel content for institutional blogs.

You don’t have to do much research to come across dozens of startups trying to get ahead in this new gold rush — trying to sell shovels to the prospectors who, in the end, will be using them to dig the graves of their own jobs and those of others.

When these AIs are good enough, job openings shrink and the assignments of those who remain change. From writers and editors, for example, we all become “robot babysitters,” correcting blatant (to humans) errors that may slip into the artificial text and that we, of course, manage to catch. (Because if there is one thing we are good at, it is failing; even in this the AI reproduces us.)

Soon, my routine will gain one more demand: proving myself flesh and blood in a purely digital environment, full of “rivals” that don’t sweat, don’t get tired, don’t get sick, and have no mood swings. We are playing on the opponent’s turf. It is an inglorious struggle.

Unlike the 18th century British Luddites, I don’t even have a machine to wreck. The generative AIs that threaten my craft exist in the cloud, that ethereal concept, mere euphemism for “big computers in controlled warehouses far away from us”. No gunfights against robots that look like Arnold Schwarzenegger, forget about it. The machine revolution will be discreet.

And unlike what the best utopias predicted, we won’t even be able to dedicate ourselves to the arts, because generative AIs also already produce illustrations, paintings and photos. They even win contests.

Perhaps our fate, the fate of humanity, is that we will all become the Simpsons grandpa yelling at the cloud. What a pathetic end.


A Quick Look at Ivory, Tapbot's Mastodon App

From the same developers as Tweetbot comes Ivory: a marvelous Mastodon app for iOS.

Ivory is still in alpha, i.e., in testing and (supposedly) with some rough edges. Last Saturday (14), I got access to this test version, which I now present here.

The good news is that Ivory is very reminiscent of Tweetbot and, at the same time, assimilates well the peculiarities of Mastodon. It couldn’t be otherwise: it shares Tweetbot’s codebase, and Tapbots’ craftsmanship in making great apps is well known.

When you open the app, you see the main timeline. Under each post are the typical action buttons: reply, retweet (here called “boost”), and favorite, plus two “internal” buttons, a share sheet and a configuration button that surfaces things like bookmarks, translation, and post and profile details.

Gestures, inherited from Tweetbot, are also present.

Three screenshots from Ivory showing home screen/timeline features.
Screenshots: Ivory/Rodrigo Ghedin.

One cool detail about Ivory is that it features Mastodon’s multiple timelines, something that, weirdly, the official app has chosen to leave out. Unlike Twitter, where there is only one, Mastodon offers three: Home, with posts from the people you follow; Local, with public posts from your own instance; and Federated, with public posts from all the instances yours knows about.
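
For the curious, the three timelines map onto Mastodon’s public REST API. A sketch of the endpoints involved; the instance domain is a placeholder, and no request is made here:

```python
BASE = "https://mastodon.example"  # hypothetical instance

timelines = {
    # People you follow; requires an authenticated token.
    "home": f"{BASE}/api/v1/timelines/home",
    # Public posts originating from this instance only.
    "local": f"{BASE}/api/v1/timelines/public?local=true",
    # Public posts from every instance this one federates with.
    "federated": f"{BASE}/api/v1/timelines/public",
}
print(timelines["local"])
```

Any client, Ivory included, builds its timeline screens on top of these same three endpoints.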

At the bottom of the window is a main menu with five icons, two of them fixed (timeline and replies) and three customizable. To change one of these three, just hold your finger on it.

The areas available there are quite varied. The highlights are the bookmarks (a kind of “private favorite” for saving posts without notifying their authors) and the statistics, which bring a series of data about your behavior while using Mastodon.

Three screenshots from Ivory showing details of the main menu (bottom of the screen) and the timeline selection (Home, Local, and Federated).
Screenshots: Ivory/Rodrigo Ghedin.

The post button is floating. You can drag it to any of the four corners of the screen.

When you tap it, it displays the composing screen. Again, all the basic options are there, even the four levels of visibility that Mastodon offers.

Not everything is perfect or up to date yet. Ivory does not yet support editing posts, a feature that has existed in Mastodon since March 2022, nor creating posts with content warnings.

Two screenshots from Ivory showing the screen for creating/editing new posts.
Screenshots: Ivory/Rodrigo Ghedin.

Other features are absent, such as support for instance-specific emojis. All of these are on the developers’ radar: Ivory’s website has a list of the pending issues (“Current Roadmap”), which they promise will be addressed in the near future.

In the settings, you can add other accounts, change the behavior of the application and how it handles Mastodon features (for example, opening all content warnings by default), and even customize details of the experience, such as the sounds (very tasteful, but I chose to disable them) and the app icon.

Three screenshots from Ivory showing the app settings, including the app icon selection.
Screenshots: Ivory/Rodrigo Ghedin.

Ivory is still in the “alpha” stage, but it doesn’t look like it: in two days of use, I didn’t run into any errors and the app worked very well.

No word yet on when Ivory will be released. The recent breaking of Twitter’s API for third-party apps, including Tweetbot, should speed up its release.

On Mastodon, Paul Haddad, one of the Tapbots developers, announced that Ivory’s development has gone into “hyper mode” in order to resolve the 3-4 mandatory fixes before submission to the App Store. On Saturday (14), Ivory’s profile reported that an early access version is expected to be released by the end of January.

Ivory will be a paid app, probably subscription-based, just like Tweetbot. If it follows the Twitter app’s pricing, we are talking about USD 0.99 per month or USD 5.99 per year.

A macOS version of Ivory is also being developed. It is not yet available or in testing and there is no release date planned.
