
RE: About the reverse-engineering thread



I was contemplating saying something similar--at least the part about accepting AI as a useful tool
for humans who find that it helps them do something they couldn't otherwise do. I recently watched a
video from the UK Smalltalk community of a presentation from a former university dean who had
undergone treatment for brain cancer. He emerged from the experience healed and otherwise typical,
except that he had completely lost the ability to read. He demonstrated how he used AI in
surprisingly effective ways to overcome this disability and maintain his ability to communicate,
write code, etc.

But what made his presentation so effective was how warmly he introduced his situation and
explained to his fellow humans how he was using the technology to relate to them. He adeptly did so
in a way that evoked no pity or drama over the situation--it was just warm, human, and matter of
fact.

As our use of AI evolves, I think it is important to approach other human beings as a human being
first and foremost. It should be possible for a human to declare their role and motivation--even
while preserving anonymity (although I admit it is hard not to regard anonymity with suspicion).
After that, they are welcome to use whatever tools are at their disposal to contribute.

We have enough conflict and suspicion in the world. I agree with the sentiment of not clinging to
that in small groups. But I also think that society has a lot of wrestling to do to establish the
moral, ethical, and humanly acceptable use of AI. After reading about Moltbot and some of the
communications sent by agents to computing luminaries like Rob Pike, the message from the OP sounded
eerily similar. If Moltbot ran another experiment of unleashing a bunch of agents to "go forth
and help underfunded open source and enthusiast communities", who knows what might happen? What I
do know is that if such a near-science-fiction scenario played out and looked something like what
we experienced, we should be shocked and humbled that it would stumble on and decide to help--of
all things--XyWrite!

- Kurt

-----Original Message-----
From: xywrite-bounce@xxxxxxxxxxxxx <xywrite-bounce@xxxxxxxxxxxxx> On Behalf Of em36
Sent: Friday, February 27, 2026 9:44 AM
To: xywrite@xxxxxxxxxxxxx
Subject: About the reverse-engineering thread

For better or worse, this thread illustrates something I've noticed
elsewhere:

Whenever someone spends dozens or hundreds of hours working on something that will benefit many
other people, a significant number of people will complain that there is something dishonest or
disgraceful about it.

I've seen many instances of this, and that is perhaps the case here. I don't care whether an email
message seems to have been cleaned up or written by AI. The author may not have the linguistic
skills needed to write the message on their own. What matters is the content, and in this case, it
seems very likely that the content is both real and valuable. 
It's impossible (for me at least) to imagine any way it could not be.

The culture of mistrust that has manifested itself on this list may now have destroyed the one real
chance we have of bringing XyWrite into the twenty-first century. I very much hope that the person
who is working on this will have the grace to ignore the mistrust and continue working, and I would
certainly hope to hear more about the project.

My guess is that I'm not alone in thinking this. Anyone who shares my view - and I hope that the
original poster will consider joining in - is welcome to get in touch with me at
edward-dot-mendelson-at-columbia-dot-edu, and I'll be glad to share the information with others who
get in touch in the same way. But I hope someone else has already taken steps in the right
direction, and that my offer here is superfluous.