
Riot.im: The Big 1.0 - OberstKrueger
https://medium.com/@RiotChat/the-big-1-0-68fa7c6050be
======
fredcy
I've been using Riot daily for several years, and this 1.0 version seems to
work really well. Like any major UI change, it's going to take me a while to
get used to the slightly different mechanics. Overall it looks more modern
than the prior versions.

------
stevenicr
Looks like lots of nice improvements! Glad to see this!

Once there is a way for me to have assigned moderators on my instance able to
click/tap to see the IP address, user agent, and associated instance of users
in the room, with the ability to ban them and add a note in an admin/moderator
panel..

with the added ability to open a list of bans and reasons, and to add or
remove instances, IP addresses, subnets, CIDRs, and hostnames..

Then I will be bringing a few hundred users to use this instead of wisechat
and realchat.

Looking forward to using this one day!

~~~
Arathorn
from a privacy perspective we really, really don't want to leak client IPs and
user agents to other users, even if they're moderators. in a disaster the
server admin can get at this, however: there is a /whois API (similar to
IRC's) which server admins can use, but none of the clients hook up UI for it.
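
for reference, the /whois response (as documented in the client-server spec)
nests connections, each with an `ip` and `user_agent`, under sessions under
devices. A minimal Python sketch of walking that shape, using a made-up
example payload rather than real data:

```python
# Sketch: pull connection IPs/user agents out of a Matrix /whois response.
# The endpoint (server-admin only) is GET /_matrix/client/r0/admin/whois/{userId};
# this payload is an illustrative example following the spec's documented shape.
whois_response = {
    "user_id": "@alice:example.org",
    "devices": {
        "ABCDEFGH": {
            "sessions": [
                {
                    "connections": [
                        {"ip": "203.0.113.5",
                         "last_seen": 1549368120000,
                         "user_agent": "Mozilla/5.0 ..."},
                    ]
                }
            ]
        }
    },
}

def connections(whois: dict):
    """Yield (device_id, ip, user_agent) for every recorded connection."""
    for device_id, device in whois.get("devices", {}).items():
        for session in device.get("sessions", []):
            for conn in session.get("connections", []):
                yield device_id, conn.get("ip"), conn.get("user_agent")

for device_id, ip, ua in connections(whois_response):
    print(device_id, ip, ua)
```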

~~~
stevenicr
I love the privacy focus. With some chat clients I stopped using them, or
turned off features, to keep IPs from being leaked from each room participant
to all the others in the room. It's important for safety and security.

I believe this is what would force us to use a STUN/TURN server if we switched
to web sockets. I have seen nefarious users pull and use IP info from chat
rooms in many ways, for blackmail and worse.

I've also watched some of these wicked people use other tricks, like sharing a
picture in the chat room, then checking their server logs to see which IPs
requested the JPG, then using that info to hack everyone's home routers, pull
their pics, and put them up..

So yes I get the importance.

However, I can assure you that the time is coming when people will need
better blocking tools in these chat systems.

The main reason I have not started testing Riot/Matrix with my groups of users
is because I would feel bad for the other people who run instances: they would
be flooded by a few groups of evil, wicked trolls, and since there appears to
be little moderation ability, it would quickly ruin the chats of everyone who
allowed open user sign-ups.

They are coming; it's just that Matrix has not been pointed out as a ripe
target yet. They will spam, for fun and profit. They will send bots, and they
will send dozens of humans to do it. Then they will post stuff to make
everyone ill - nazi stuff, child prn stuff, all the things you don't want in
your system - and your other users will leave if they see it.

Whatever you do to ban them, they will return. If it's possible to come back
with a new IP and a new email addy, they will be back with six identities
before you finish banning the last one. They will register names that look
like others' in your system. Heaven forbid you have not blocked blank-space
ASCII/Unicode characters..

When these groups of people find that it frustrates you to have your system
turned into 4chan, then the real fun begins. Even if you can see their IPs it
won't matter; they have access to thousands of subnets.

Anyhow, I hope more moderation tools come to Riot/Matrix - I would be fine
with giving my appointed moderators the ability to click to block a subnet and
have the server do it, even if the moderators could not actually see the IP of
the individual.

The blocking is most important. Some of my mods look at IPs to spot cases
where someone is suspected of being an imposter, and things like that, but
that's less essential. All of my moderators have made agreements with me about
IPs and personal info during the training process, and they understand how
seriously we take privacy.

I would think this is going to be extremely important with the federated
options and such.

~~~
Arathorn
totally agreed on being able to block abusive users. but being able to spy on
their IPs does not help much in doing so.

~~~
stevenicr
Anything we can do to make it easier to block bad users, I am all in.

Perhaps we could have options to let moderators see hostnames or something
instead? Maybe even some kind of hashed code..

The thing is, we currently rely on mods seeing IPs to identify imposters and
to detect when someone comes into the chat room with multiple usernames at the
same time. This is not a 100% solution, sure, and it's legacy tooling we are
talking about - but it has been immensely helpful many times.

Some of the worst trolls will come in with a dozen usernames at the same time.
When you kick one out of the room - the other 11 can still troll.

Being able to see the IPs of different users can give clues as to whether
other suspicious behavior indicates something more nefarious.

So far, banning by IP subnet and CIDR has been the most effective method for
stemming most of the spam bots and abusive trolls we have encountered.
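
As a sketch of the subnet/CIDR ban check described above, Python's standard
ipaddress module covers the matching logic. The banned ranges here are made-up
documentation addresses, not a real ban list:

```python
import ipaddress

# Hypothetical ban list of CIDR ranges (RFC 5737 documentation addresses).
BANNED_NETWORKS = [
    ipaddress.ip_network(cidr)
    for cidr in ("203.0.113.0/24", "198.51.100.0/24")
]

def is_banned(ip: str) -> bool:
    """True if the connecting IP falls inside any banned subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BANNED_NETWORKS)

print(is_banned("203.0.113.42"))  # inside the first /24 -> True
print(is_banned("192.0.2.7"))     # outside every banned range -> False
```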

We do get some smart ones who log in with a dozen different IPs from a VPN
service, and that does make things much more difficult - but it also comes at
a cost to the troll, so it's worth doing.

I think the instance/server admin may need to view IPs to unban if there's an
issue - but maybe the moderators just need to see a numeric hashed
representation, so we can see that sally45 and billy29 are not in the room
from the same connection? Or hostnames, or some other scheme.
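
One way the hashed representation floated above could work is a keyed hash
(HMAC) of the IP under a server-held secret, so moderators see only a stable
pseudonym and never the raw address. A minimal sketch, with a hypothetical
secret:

```python
import hashlib
import hmac

# Hypothetical per-server secret; in practice it would be generated and kept
# server-side (and possibly rotated) so pseudonyms can't be reversed offline.
SERVER_SECRET = b"example-only-secret"

def ip_pseudonym(ip: str) -> str:
    """Stable short code for an IP: moderators can compare codes to spot two
    usernames sharing a connection without ever seeing the raw address."""
    digest = hmac.new(SERVER_SECRET, ip.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

# The same connection always maps to the same code.
print(ip_pseudonym("203.0.113.42") == ip_pseudonym("203.0.113.42"))  # True
```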

I am open to all ideas on this front. I have spent hundreds of hours dealing
with banning chat room users and unbanning people on occasion.

So far the rudimentary moderation tools of the old realchat from 15 years ago
are better than those of the modern chat systems we have tried (Rocket.Chat,
Riot, etc) - hopefully this will change soon.

------
qqii
For authenticating multiple users, I wonder if you can do something GPG like
where you generate a web of trust?

~~~
Arathorn
currently we're aiming for a web of trust between your own devices, and then
transitively trusting another user's devices once you've verified that user.

however, a GPG-style web of trust which lets transitive trust span across
users is a bit questionable: all it takes is one party being sloppy with
keysigning and the whole thing falls apart. It also leaks metadata about who
knows whom like a sieve. So we're going to see how far we can get without it :)

~~~
rwmj
I think I mentioned this before, but I think you should allow for the case
where all participants in the virtual room physically meet. Think conference,
or a meeting on business premises. They should be able to swap keys with each
other efficiently, perhaps using NFC, and without having to have O(n^2)
interactions. Yes it may not be super-secure but you can assume the security
is already taken care of by the physical security of the meeting.

~~~
Arathorn
yeah, this is a good example where a temporary(?) web-of-trust could work. in
retrospect, i think the cross-signing proposal
([https://github.com/uhoreg/matrix-doc/blob/cross-signing2/proposals/1756-cross-signing.md](https://github.com/uhoreg/matrix-doc/blob/cross-signing2/proposals/1756-cross-signing.md))
has some thoughts about how transitive trust between users could work, but
i'm failing to find it now.

I've raised a bug to track:
[https://github.com/matrix-org/matrix-doc/issues/1886](https://github.com/matrix-org/matrix-doc/issues/1886) -
thanks!

------
secfirstmd
Congrats to the Riot.im folks. The UI/UX was one of the only things that we
found was holding a number of organisations we work with back from using it.
Combined with Matrix, it's a wonderful tool.

~~~
Arathorn
thanks :)

------
pndy
You can't see the content of the room without creating an account now?

~~~
Arathorn
sure you can.
[https://riot.im/app/#/room/#matrix:matrix.org](https://riot.im/app/#/room/#matrix:matrix.org)
in an incognito tab does what it always did and lets you preview the room
without signing up?

