
Chromium's Impact on Root DNS Traffic - jakob223
https://blog.apnic.net/2020/08/21/chromiums-impact-on-root-dns-traffic/
======
xg15
Couldn't the traffic be somewhat reduced by changing the time and order of
operations?

Currently, Chrome does the following:

(1) on each network change, send three DNS requests with random hostnames.

(1a) If at least two of the queries resolve to the same IP, store the IP as
the "fake redirect address".

(2) on a user search, query the first search term as DNS.

(2a) If the query result is NXDOMAIN or matches the fake redirect address, do
nothing. Otherwise, show the "local domain" hint.

Instead, it could do:

(1) on a user search, query the first search term as DNS.

(1a) if the query comes back with NXDOMAIN, don't show the hint and stop.
We're done.

(2) otherwise, make two more DNS queries with random domain names to check for
fake redirects.

(2a) if the two queries resolve to the same IP as the first one, we have a
fake redirect. Don't do anything. Otherwise, show the "local domain" hint.

Results of step (2) could be cached until a network change.

This would only require 2 instead of 3 probe queries and only if the user
actually searched for something _and_ if the search term actually caused a DNS
match (fake or genuine).

~~~
skissane
> (1a) If at least two of the queries resolve to the same IP, store the IP as
> the "fake redirect address".

From reading the source, it actually does an HTTP HEAD request, chasing
redirects, and records the origin of the final page, using that as the
redirect address. So even if two hostnames yield different IPs, if they end up
redirecting to the same hostname, it will be detected.

> (2a) if the two queries resolve to the same IP as the first one, we have a
> fake redirect. Don't do anything. Otherwise, show the "local domain" hint.

What if an ISP uses multiple IPs in the fake redirect, and alternates over
those IPs in each successive response?

~~~
xg15
> _What if an ISP uses multiple IPs in the fake redirect, and alternates over
> those IPs in each successive response?_

Good point. I was wondering how they'd deal with that in the actual
implementation.

I think you got the answer, though: they match HTTP origins instead of IP
addresses. So I imagine you could do the same in step 2: do an HTTP HEAD
request to the search word host and two additional ones to random hostnames,
following redirects. If the final origins are the same, there is fakery going
on.

A problem with this could be unexpected HEAD requests to actual internal
hosts: there is no guarantee that an internal host which was never meant to
receive HEAD requests would react gracefully, or in any predictable way, to
one.

I'm not sure how they solve this currently. Maybe this could at least be
mitigated by only sending the HEAD request to the search word host if there is
reasonable suspicion requests are being redirected - e.g. only if the two
random hosts resolved and were both redirected to the same origin.

Finally, you could cut all of this short by also connecting to (search
word):443 and trying to initiate a TLS session handshake. If the host answers,
you know it's probably a genuine internal host that talks HTTPS and you don't
need to do any additional probes. (And you can also abort the handshake and
don't ever need to send an actual HTTP request to the host)
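As a rough illustration of the origin-matching idea (an assumed sketch using Python's standard library, not Chromium's actual code), the probe boils down to "follow redirects from a HEAD request and compare where you land":

```python
# Sketch of origin-based fake-redirect detection: if HEAD probes for
# unrelated garbage hostnames all land on one final origin, the network
# is rewriting NXDOMAIN responses.
from urllib.parse import urlsplit
from urllib.request import Request, urlopen


def origin(url):
    """scheme://host:port of a URL -- the part compared for 'same origin'."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port)


def final_origin(url, timeout=3):
    """Issue an HTTP HEAD and follow redirects; return the final origin.

    urlopen follows redirects by default, so resp.geturl() is the URL the
    probe ultimately landed on.
    """
    resp = urlopen(Request(url, method="HEAD"), timeout=timeout)
    return origin(resp.geturl())


def looks_hijacked(final_origins):
    """True if every probe ended up at the same origin."""
    return len(set(final_origins)) == 1
```

Comparing origins rather than IPs survives the multiple-IP trick discussed above, since all the rotating IPs still redirect to one portal origin.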

~~~
skissane
> A problem with this could be unexpected HEAD requests to actual internal
> hosts: There is no guarantee an internal host that was never meant to
> receive HEAD requests would react gracefully or in any way predictable to
> one. I'm not sure how they solve this currently

A perfectly legitimate answer is to not even try. Per the HTTP spec, sending
HEAD is supposed to be harmless. If it causes harm, that’s a violation of the
spec. I’m sure some legacy server out there segfaults on HEAD, but that’s not
a browser vendor’s problem, and it isn’t up to the browser vendor to do
anything to try to prevent it. Browsers (not just Chrome) send HTTP HEAD in
other scenarios as well. And I think this problem is relatively rare in
practice, since one hears few reports of it actually occurring.

~~~
xg15
I disagree. In this case, it would be Google who is changing the status quo.

I think it's legitimate for a fully internal server on an internal network
that only handles internal clients to only implement the parts of the HTTP
spec that are relevant to the exchange.

If Google worms its way into that network and starts to talk to random
servers, they'd be at fault for causing problems.

I think this holds particularly strongly in this case, as the requests are not
tied to an obvious user action: if I typed http://dumbo/ into the address bar
and the browser's GET request broke the server, I'd be more inclined to view
the server's team as at fault than if I just searched for "dumbo" and found
out the browser broke a server that I didn't even know existed.

Independent of that, it's of course good advice to always build your services
as robust as possible - internal or not - and to follow the spec wherever
you're able to.

~~~
skissane
> I disagree. In this case, it would be Google who is changing the status quo.

This code has been in Chrome since 2010. When you've been doing something for
the last ten years, you aren't changing the status quo; you are now part of
the status quo.

> I think it's legitimate for a fully internal server on an internal network
> that only handles internal clients to only implement the parts of the HTTP
> spec that are relevant to the exchange.

So, there are two ways of "not implementing HEAD": (1) ways that don't harm
the availability of the service or other connections to it (return an HTTP
error, abruptly close the connection, etc.); (2) ways which do (e.g. crash the
whole service upon HTTP HEAD on a single connection).

If a service isn't implementing HTTP HEAD in way (1), then Chrome isn't going
to hurt it. If a service isn't implementing HTTP HEAD in way (2), then it is
buggy, poorly written, and also insecure (HTTP HEAD becomes a denial of
service attack), and that's not Google's problem, that's the problem of
whoever maintains that service

In practice, few services which don't implement HEAD in way (2) are even going
to exist, because browsers (both Chrome and others) regularly send requests a
server may not expect in other circumstances as well (CORS preflights, for
example, send OPTIONS). If an unexpected method makes your service crash, your
service is going to be crashing a lot even if Google had never implemented
this particular feature.

------
jve
The first question I asked myself: is there a way to disable it? The networks
I'm attached to don't do any hijacking.

And yes, luckily there is a policy to disable it:
[https://cloud.google.com/docs/chrome-enterprise/policies/?policy=DNSInterceptionChecksEnabled](https://cloud.google.com/docs/chrome-enterprise/policies/?policy=DNSInterceptionChecksEnabled)

Registry key: Software\Policies\Google\Chrome\DNSInterceptionChecksEnabled

PowerShell (create the key with New-Item first if it doesn't exist):
Set-ItemProperty HKLM:\SOFTWARE\Policies\Google\Chrome -Name
DNSInterceptionChecksEnabled -Value 0 -Type DWord

If you are managing Chrome via GPO, you should do it via GPO. Templates can be
downloaded here:
[https://chromeenterprise.google/browser/download/](https://chromeenterprise.google/browser/download/)

~~~
RedShift1
I wouldn't apply this policy to road warriors, even if they spend most of
their time in a location you control.

------
peteretep
Wait, so Chrome leaks the first word of my searches to my ISP? That doesn’t
sound like something I want to happen

~~~
boring_twenties
Your local nameserver should be configured to not forward unqualified names
upstream.

~~~
JdeBP
There is no such thing as unqualified names at this level. All domain names
are fully qualified, and comprise one or more labels.

~~~
boring_twenties
I'm referring not to a recursive nameserver, but to a caching one that simply
forwards queries to an upstream resolver. Like the one in every consumer
router. Usually that's dnsmasq, with this option:

    
    
           -D, --domain-needed
                  Tells dnsmasq to never forward A or AAAA queries for plain
                  names, without dots or domain parts, to upstream
                  nameservers. If the name is not known from /etc/hosts or
                  DHCP then a "not found" answer is returned.

~~~
JdeBP
That is not an unqualified domain name, and notice that it does not say that
it is.

* [http://jdebp.uk./FGA/dns-name-qualification.html](http://jdebp.uk./FGA/dns-name-qualification.html)

And the words that you are looking for are "resolving" and "forwarding". A
proxy DNS server either does query resolution itself or forwards to another
proxy DNS server that does. Both sorts can cache, so whether something is a
caching server is not the distinction. dnsmasq is choosing whether to forward
the query or to do query resolution itself (using a local data source)
according to the number of labels in the domain name. As I said, at this level
the idea of domain name qualification does not apply.

You are also mis-using "resolver", incidentally. The actual meaning of
"resolver" per RFC 1034 is not what people sometimes think it to be. Avoid
using "resolver". The mis-use creates confusion.

See
[https://news.ycombinator.com/item?id=15232208](https://news.ycombinator.com/item?id=15232208)
.

------
0x0
Verisign has nobody but itself to blame for "inventing" this with its
SiteFinder fiasco in 2003.

~~~
aaronAgain
Right! Around 2010, when this feature was implemented in Chrome, hijacking was
a business model that was discussed in regular meetings. I recall one hijacker
trying to sell themselves to the company that was 'complaining' about the
hijacks.

"Buy us out and we'll stop, and you can use the tech on your customers?!?"

One of the boldest business proposals I've been party to. After a few deep
breaths and some laughter, the offer was not taken. But that wasn't a one-off
event. Spent a lot of time in early 2010's directly trying to protect
customers from this stuff. Still do, but it's getting much harder with TLS-
everywhere, HSTS, DOH, and many other things. Not impossible though, we can
never let up on the pressure to keep the ROI too low for hijacking. The
various network operators and ISPs that let these companies put racks in their
data-centers to inspect user traffic should be
<<insert_your_own_horrible_idea_here>>.

------
kevingadd
For me, the kicker: if I'm reading it correctly, over 40% of DNS traffic to
the root server they examined is just diagnostic probes from Google Chrome
being used to spot malicious DNS servers.

~~~
gbil
We got hit by this issue in March, when our remote users increased 5+ times
and the DNS traffic going through our VPNs was causing a headache for our DNS
servers. We pinpointed this to this Chrome functionality, which also affects
other Chromium-based browsers like the new Edge, and we had to deploy a
relevant GPO to disable it. Some background: I'm talking about ~200k+ remote
users. Also, while in the office the load is distributed across tens of DNS
servers, on VPN only a fraction of those are used. Furthermore, if I remember
correctly, this "feature" was enabled in a Chrome version distributed to our
clients maybe a month before the lockdowns, so there was little time to see
the effect while clients were still in the office.

~~~
jiggawatts
The last time I saw DNS throughput or performance issues was around 2003 on a
network with 200K desktops and servers. That was 17 years ago, and they don't
have a problem any more, despite growing in footprint to nearly half a million
client machines.

I struggle to understand how DNS can possibly be a performance issue in 2020.
In most corporate environments, the "working set" of a typical DNS servers
will fit in the L3 cache of the CPU, or even the L2 cache.

The amount of network traffic involved is similarly minuscule. If all 200K of
your client machines sent 100 requests per second, each 100 bytes in size, all
of those to just _one server_, that adds up to 2 GB/s, or about 16 Gbit/s, and
that per-client query rate is already wildly pessimistic.

If your DNS servers are struggling with that, get better servers. Or networks.
Or IT engineers.
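The back-of-envelope arithmetic here is easy to check (the per-client rate is an assumption taken from the comment above; real desktops query far less often):

```python
# Worst case from the comment above: 200K clients, 100 DNS queries per
# second each, 100 bytes per query, all aimed at a single server.
clients = 200_000
queries_per_sec = 100        # wildly pessimistic; ~1/s is more realistic
bytes_per_query = 100

bytes_per_sec = clients * queries_per_sec * bytes_per_query
gbit_per_sec = bytes_per_sec * 8 / 1e9
print(f"{gbit_per_sec:.0f} Gbit/s")        # 16 Gbit/s at the contrived peak

# At a more realistic 1 query/sec per client:
print(f"{gbit_per_sec / 100:.2f} Gbit/s")  # 0.16 Gbit/s
```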

------
russellbeattie
I'm sure anyone here who has set up a PiHole ad-blocking DNS server at home
has run into these random domain requests and wondered what was going on. At
first I thought one of my devices had a virus on it or something until I did a
few searches and discovered it was Chrome being ludicrous. (Next topic:
Getting Chrome to actually use the DNS provider that you specify and nothing
else...)

~~~
merlinscholz
I recently blocked port 53 in my firewall completely, for that exact reason. I
use an internal DNS server that forwards to a DoH upstream server. No more
rogue devices trying to use their own DNS, at least until they all switch to
DoH too.

~~~
nikeee
I also blocked port 53 in my firewall (except for the Pihole; no DoH there).
After that, I noticed that some applications have some DNS servers hard-coded.
8.8.8.8 being pretty prominent.

My solution was to assign the Pihole the IP address 8.8.8.8 as well, and then
add a static route at the router to route 8.8.8.8 to the Pihole. Now every
request to dns.google is also handled by the Pihole instead of timing out.

------
padde
It would be interesting to have an estimate of the energy consumed (globally)
by this Chrome/Chromium feature...

~~~
jeffbee
I'm curious why you think this might be significant. All global root server
traffic amounts to < 1 Gbps. Under contrived conditions you could easily serve
it all from a single laptop computer, but even if we assume that realistically
it's being served by a large, distributed collection of servers each drawing
~250 W continuously and each housed in one of those ridiculous corporate
datacenters with a PUE over 2.0, you're still looking at a global energy cost
comparable to one tankful of motor fuel per day, or much less than the energy
used by one single commercial airplane.

------
malkia
You can see the code online through the CS browser -
[https://source.chromium.org/chromium/chromium/src/+/master:c...](https://source.chromium.org/chromium/chromium/src/+/master:chrome/browser/intranet_redirect_detector.cc;l=148?q=%22we%20generate%20a%20random%20hostname%22&ss=chromium)

------
tinus_hn
Fallout from the ISPs effort to hijack failed DNS queries.

------
lapcatsoftware
On macOS you can block these with the excellent product Little Snitch.

I've got several rules for Google Chrome in Little Snitch that seem to do the
trick. Deny outgoing UDP connections, and Deny outgoing TCP connections to
port 80 for the IP addresses and domain for my ISP. You can see these if you
monitor traffic.

------
kevincox
It seems like they could rotate these much less frequently to let caches work.
It seems that these are random to avoid DNS servers hardcoding a response for
them. However, they could be pseudo-random based on the current day, month, or
release, so that it would be hard enough to intercept them (unless the DNS
server was really committed to doing this, but there are other ways to achieve
that) while still allowing a lot of caching.

I think the only downside is that you would leak some information about your
system clock.
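A minimal sketch of such a date-seeded scheme (entirely hypothetical; the `probe_hostnames` helper and its seeding format are invented for illustration, not anything Chromium ships):

```python
# Hostnames that are random-looking but stable within a day: every client
# on the same release derives the same labels, so a shared recursive
# resolver can answer most probes from its NXDOMAIN cache instead of
# forwarding them to the roots.
import datetime
import hashlib


def probe_hostnames(day, release, count=3, length=10):
    """Derive `count` stable single-label hostnames from (day, release)."""
    names = []
    for i in range(count):
        seed = f"{day.isoformat()}|{release}|{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # map hex digits onto a-p so the label looks like Chrome's probes
        label = "".join(chr(ord("a") + int(c, 16)) for c in digest[:length])
        names.append(label)
    return names
```

An ISP could still precompute the day's labels and whitelist them, which is exactly the trade-off discussed in the replies below: caching is gained, and only a determined interceptor defeats it.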

~~~
majewsky
> It seems that these are random to avoid DNS servers hardcoding a response
> for them. However they could be pseudo random based on [the current date and
> browser release]

That would still allow ISPs to compute the limited number of domains for which
NXDOMAIN would need to be sent at any given point in time.

(Whether they'd do it is another story. The random pattern currently used by
Chrome looks like it may still be easily detectable at the DNS-recursor level,
so maybe the ISPs really don't bother beyond the simple NXDOMAIN -> portal
domain replacement.)

~~~
kevincox
As I said, if they make specific effort they will succeed. The current scheme
can be broken by returning a number of different IPs instead of one or two. I
think my proposal has a nice balance between making ISPs put in non-trivial
effort and not putting a lot of load on the root servers.

~~~
aaronAgain
This is a classic arms race. The hijackers back off for a while, but as is
always the case in low-margin, low-regulation, low-consequence environments,
bad actors will find a way to skim a tiny value out of a massive number of
transactions. Give a percentage of that to the network operator, and take the
rest home.

The network operators enable this behavior. It would be next to impossible for
it to be useful (ROI wise) if they didn't intentionally support it with access
to their networks. It doesn't need to be an arms race, but we refuse to
regulate or punish anyone in this space. We waste massive amounts of resources
detecting and counteracting the hijacking services. The human (developer) cost
is where the big waste is here, not electricity.

and the fight goes on....

------
jiggawatts
I'm curious to know how much data the root namespace servers put out in terms
of gbps, but this doesn't seem to be public information.

~~~
CKN23-ARIN
[https://root-servers.org/](https://root-servers.org/)

Select a root server at the bottom. Some, but not all, have a "statistics"
link. Seems to be stated in qps and message size distribution, but you should
be able to derive traffic volume from that.

~~~
steventhedev
Assuming the mean is close to the median, they are reporting ~10B requests
daily with a median response size of around 1 KB. 10 TB daily is a little
under 1 Gbps. Traffic is spiky, but this isn't particularly demanding once you
consider they have multiple data centers/servers. Of course, I may have
misread something as daily that was hourly, or something like that...
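A quick sanity check of that conversion, using the figures quoted above:

```python
# ~10 billion queries/day with a median response around 1 KB, converted
# to an average bit rate over an 86,400-second day.
queries_per_day = 10e9
bytes_per_response = 1000

bytes_per_day = queries_per_day * bytes_per_response     # ~10 TB/day
avg_gbit_per_sec = bytes_per_day * 8 / 86_400 / 1e9
print(f"{avg_gbit_per_sec:.2f} Gbit/s average")          # ~0.93 Gbit/s
```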

~~~
jiggawatts
So it looks like the root namespace providers output a totally reasonable
amount of traffic. Divided between the hundreds of points of presence
globally, this is tens of megabits per physical host.

This FAQ is illuminating: [https://www.verisign.com/en_US/domain-names/internet-resolution/node-hosting/index.xhtml](https://www.verisign.com/en_US/domain-names/internet-resolution/node-hosting/index.xhtml)

The servers themselves are ordinary 1 RU physical rack mount servers with 1
Gbps or 10 Gbps Ethernet. Nothing special. I'm guessing that most of the load
isn't from the root, e.g.: "j.root-servers.net", but from hosting the
authoritative DNS servers for .com and .net (b.gtld-servers.net) on the same
box. That would surely have more traffic and much more data.

~~~
moonchild
Reasonable _quantity_ of traffic, but they have to be very reliable.

~~~
londons_explore
No need for reliability when there is 26-way failover...

------
csagan5
ungoogled-chromium[1] and Bromite[2] have had a patch to disable this for a
while now

[1] [https://github.com/Eloston/ungoogled-chromium/blob/14fb2b0/patches/extra/ungoogled-chromium/disable-intranet-redirect-detector.patch](https://github.com/Eloston/ungoogled-chromium/blob/14fb2b0/patches/extra/ungoogled-chromium/disable-intranet-redirect-detector.patch)

[2] [https://github.com/bromite/bromite/blob/410fc50/build/patches/ungoogled-chromium-Disable-intranet-redirect-detector.patch](https://github.com/bromite/bromite/blob/410fc50/build/patches/ungoogled-chromium-Disable-intranet-redirect-detector.patch)

------
jacobsenscott
I can't get past the `size_t i` rather than `int i` in the first loop. Why. I
suppose it is some type of defensive programming.

~~~
kevin_thibedeau
Bit flip changes an int to a large negative value. Now you're stuck doing a
signed comparison for a while.

------
1vuio0pswjnm7
Why does Chrome (Google) need to know whether DNS is being intercepted? What
actions does Google take based on the answer?

Note that under this crude test of sending queries for unregistered domains, a
user who administers their own DNS could be indistinguishable from "DNS
interception" by an ISP or other third party.

I administer my own DNS. I do not use third party DNS. These random queries
would just hit my own DNS servers, not the root servers.

~~~
jve
From article:

> Users on such networks might be shown the “did you mean” infobar on every
> single-term search. To work around this, Chromium needs to know if it can
> trust the network to provide non-intercepted DNS responses.

Don't know if this is the sole reason.

~~~
1vuio0pswjnm7
I think you are right.

Reminds me of the story behind "Google Public DNS". Back in 2008/2009, OpenDNS
was hijacking "queries" (NXDOMAIN) typed in the address bar to their own
search page ("OpenDNS Guide", or some such) on an opendns.com subdomain. In
response, Google launched its own open resolver.^1 (OpenDNS was later acquired
by Cisco)

1\. [http://umbrella.cisco.com/blog/opendns-google-dns](http://umbrella.cisco.com/blog/opendns-google-dns)

------
alpb
I think I understood most of the article, but I missed the initial part. Why
is there a probe in Chrome that uses DNS to query random 7-15 character
hostnames, only to get NXDOMAIN and burden the root nameservers? What does
this probe achieve?

~~~
kevingadd
Some DNS providers (like ISPs) will hijack NXDOMAINs and redirect you to ads
or stuff like that. Chrome wants to detect that.

~~~
dylz
There was a point where, at least in the US, this was standard behaviour for
virtually every single major ISP and mobile provider. Several used to hijack
_all_ port 53 traffic to disallow you from using anything but their resolver.

~~~
ananonymoususer
And for those who don't understand why this is a bad thing, I will present my
own use case. I run pi-hole at home and frequently work from there for another
company. That company has provided me with a laptop that uses Cisco's DNS
"Umbrella", which is some sort of security feature:
[https://docs.umbrella.com/deployment-umbrella/docs/point-your-dns-to-cisco](https://docs.umbrella.com/deployment-umbrella/docs/point-your-dns-to-cisco)
Because my company laptop doesn't pay attention to the DNS servers provided by
DHCP, and ignores the local search domain, if I try to ssh into a machine on
my local network (without an FQDN) from the company laptop, it replaces the
local search domain with the corporate domain, does the lookup, and gets an A
record from Umbrella that is not on my local network. It makes the ssh
connection and (surprisingly) reaches an ssh server, which asks
me for my password. The login fails, and my password (in plain text) could
very well have been harvested by the ssh server on the catchall host. Now you
are going to tell me that I shouldn't use ssh passwords, and should instead be
using RSA keys for ssh. Regardless of what the NSA tells you, THIS IS ALWAYS A
BAD IDEA because once any account is compromised, ALL OTHER ACCOUNTS with
locally stored keys ARE ALSO COMPROMISED.

Sorry for the rant, but wildcard catchall DNS is a REALLY BAD THING.

~~~
zootboy
> THIS IS ALWAYS A BAD IDEA because once any account is compromised, ALL OTHER
> ACCOUNTS with locally stored keys ARE ALSO COMPROMISED.

This is not universally true. If you generate separate private keys for each
server-client pair, compromising one private key will limit the damage to just
the one server.

~~~
ananonymoususer
That is just not true. It may be the case if the key itself is compromised,
but consider that you may have many different accounts scattered on different
servers. Once one of them is compromised, the attacker now has access to every
other account because they are all chained together.

~~~
shawnz
Can you describe the attack scenario you're imagining in a bit more detail?
Because that doesn't sound possible to me.

------
lionkor
Why is the C++ code labelled as coming from some .c file?

------
stefan_
Why on earth is there someone with shell access to the DNS root zone and
running tcpdump?

~~~
pilif
How would they maintain the root servers and correct issues without shell
access or tcpdump? Make blind guesses and restart the server until the problem
goes away (it won't)?

No matter how high-profile the environment, eventually, the rubber will hit
the road and _some_ human will be in a privileged position to be able to fix a
problem.

That is true for every single service out there. Yes. Including Gmail.
Including AWS. Including Twitter. Everywhere.

Depending on the size and profile of the service, it's more or fewer people
needing to jump through more or fewer hoops to get there, but this _must_ be
true for _any_ service.

Always keep this in mind when you make the decision to move your data to a
cloud service.

~~~
stefan_
Why is a server with a problem still part of the root zone? And no, this is
absolutely not the case for _serious_ operators. Access to production systems
is highly regulated.

~~~
nickelpro
Yes, highly regulated access with lots of hoop jumping, that's what they said.
And there exists a person who has jumped through all the hoops and has that
access. And that hoop jumping person ran tcpdump on the root server.

------
mschuster91
The worst thing is, this will not even detect a well written NXDOMAIN
interceptor that only hijacks requests to valid top level domains.

It's about time for DNSSEC to be available on all TLDs and for browsers to nag
if it is broken.

~~~
tptacek
As I wrote above: DNSSEC can't do anything about unsigned zones, and the
overwhelming majority of zones in both the North American Internet and in
popular domain lists like the Moz 500 are unsigned, and will remain unsigned,
despite almost a decade of pleading from DNSSEC advocates to recant.

What's crazy about this is that there's a trivial solution to forged NXDOMAIN
responses that people can adopt immediately: just DoH to a provider that
doesn't forge NXDOMAIN responses (none of the major providers do).

I sometimes wonder whether the vehemence of the anti-DoH advocacy is rooted in
concern that it will cause DNSSEC to lose yet another potential motivating use
case.

~~~
aaronAgain
I've never looked at DoH as an attack on DNSSEC, though I suppose you could. I
think the resistance is more about the big corporate and Internet-level DNS
operators, like Google's 8.8.8.8: they want to be able to manipulate DNS
responses when necessary. I know, evil corporate IT ops hijacking my HN
connection. No, not that.

Think about a coordinated effort by top tier DNS providers globally to stop a
giant bot network by simultaneously 'hijacking' DNS responses for the command
and control server host-names. In classic DNS this is easy, just intercept the
requests at the LDNS provider and return a dummy server IP, all good.

That falls apart with DOH and DNSSEC. With DNSSEC you cannot forge a response
to a client that strictly expects signed responses for a particular zone. And
with DOH, the various corporate IT shops cannot inspect and 'hijack' the
responses. Though, the DOH operator can still change the response. But that
moves the capability outside of local corporate IT and into a multinational
company that might not agree with your request to 'fix' a problem via assisted
DNS hijacking.

So all of these new, safer DNS delivery methods do legitimately impact the
ability of "good"* operators to protect the Internet. Is the trade-off worth
it to protect users' DNS traffic versus being able to respond to threats? I
think that protecting users' daily traffic is net-net better, as it is a
steady-state problem and state-sponsored actors have the resources to subvert
a population via DNS. But I also feel the loss of a tool to protect users at
the same time. Things like this are never zero-sum.

Disclaimer: I work for Microsoft, and although I don't operate DNS services as
part of my job, I have spent a lot of time on this particular topic over the
years. These are my opinions, not the company's. I welcome challenges to my
opinions; that's how I learn.

*"good" is always a situational thing.

~~~
pvg
Losing the ability to do this very specific mitigation seems a tiny price to
pay for not having everybody's DNS requests have zilch for transit privacy and
integrity all the time.

------
elric
I don't get this feature. And I really hate that it's present in pretty much
every browser these days. If I want to type a URL, I'll use the address bar.
If I want to search, I'll use the search bar. Different bars with different
keyboard shortcuts and different purposes. Why do so many browsers merge the
two? Screens are insanely wide these days, so screen real estate can't be the
reason. Are we trying to trick users into thinking that URLs aren't a thing
anymore?

Maybe this "omnibox" doesn't know whether I want to enter a hostname or a
search term, but _I_ do.

~~~
tasogare
> Screens are insanely wide these days

This is not relevant for URL or search bars since they need to be displayed
horizontally. Separate bars means less vertical screen space, which is still
scarce.

~~~
ebg13
> _This is not relevant for URL or search bars since they need to be displayed
> horizontally_

Need? Has anyone tried?

~~~
Arnt
I've seen it, back around 1999, possibly Konqueror? Something that let you
drag toolbars around, and if you moved the address bar to the left or right
side it would change the direction of writing.

Let's say that testing it briefly was enough. Editing tilted text works up to
around 45 degrees; steeper than that is a strain.

