Notes to self, 2024
2024-05-03 - mysql binlog replay / max_allowed_packet
Trying to replay MySQL binlogs? And running into max_allowed_packet errors? Fear not. Likely this is not corruption, but packets that really are that big.
In 2021 I wrote about mariabackup and selective table restore. That blog entry shows how one might restore a mariabackup-saved snapshot of a MariaDB (MySQL) database. Specifically, it focuses on recovering only one or two tables selectively and it details optional GPG decryption and decompression.
Here's a recap of the mandatory steps for full recovery and additionally how to replay binlogs to a specific point in time.
Recap snapshot restore
Assume we're starting with a directory with the snapshot of the DB, already decompressed. It might look like this:
$ ls -F1 /path/to/snapshot
aria_log.00000001
aria_log_control
backup-my.cnf
ib_buffer_pool
ibdata1
ib_logfile0
my_project_1/
my_project_2/
my_project_3/
mysql/
performance_schema/
xtrabackup_binlog_info
xtrabackup_checkpoints
xtrabackup_info
Move the log to /var/log/mysql, the rest to /var/lib/mysql, and give the files the correct ownership (mysql:mysql).
# mv /path/to/snapshot/ib_logfile* /var/log/mysql/
# mv /path/to/snapshot/* /var/lib/mysql/
# chown -R mysql:mysql /var/lib/mysql/ /var/log/mysql/
From there, become the mysql user and “prepare” the data.
$ cd /var/lib/mysql
$ mariabackup --prepare --use-memory=20G --target-dir=.
This might take some time, depending on how large the snapshot is.
Then, start the database daemon, optionally with --skip-grant-tables if you don't have credentials on hand.
NOTE: If you're restoring something live, make sure clients cannot start connecting before you're all done with the restore.
Replaying binlogs
Time to replay the binlogs.
$ ls -F1 /path/to/binlogs
mariadb-bin.079217
mariadb-bin.079218
mariadb-bin.079219
mariadb-bin.079220
mariadb-bin.079221
mariadb-bin.079222
Check in xtrabackup_binlog_info where we should start.
$ cat /var/lib/mysql/xtrabackup_binlog_info
mariadb-bin.079219 93481903 0-1-2297759094
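If you script this restore, the start file and position can be pulled out of that file programmatically. A minimal sketch (parse_binlog_info is my own helper, not part of mariabackup):

```python
# Pull the binlog filename and start position out of xtrabackup_binlog_info.
def parse_binlog_info(text):
    """The file holds whitespace separated fields: binlog filename,
    start position and (on Galera setups) the GTID state."""
    fields = text.split()
    return fields[0], int(fields[1])


sample = 'mariadb-bin.079219 93481903 0-1-2297759094\n'
print(parse_binlog_info(sample))  # ('mariadb-bin.079219', 93481903)
```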
Dump the binlogs to stdout and feed that to mysql. You want --binary-mode in case there are binary blobs in them. (ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode.)
$ mysqlbinlog --start-position=93481903 \
    /path/to/binlogs/mariadb-bin.{079219,079220,079221,079222} | \
    mysql --binary-mode
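For actual point-in-time recovery you don't have to replay everything: mysqlbinlog also takes --stop-datetime (or --stop-position), so replay halts just before the given moment. A sketch that builds such a command line (paths and timestamp are placeholders):

```python
# Build a mysqlbinlog replay command that can stop at a point in time.
# --start-position applies to the first file given; --stop-datetime makes
# mysqlbinlog stop at the first event at or after that timestamp.
def replay_command(binlogs, start_position, stop_datetime=None):
    cmd = ['mysqlbinlog', '--start-position=%d' % start_position]
    if stop_datetime:
        cmd.append('--stop-datetime=%s' % stop_datetime)
    return cmd + list(binlogs)


print(' '.join(replay_command(
    ['/path/to/binlogs/mariadb-bin.079219'], 93481903,
    stop_datetime='2024-05-03 12:00:00')))
```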
If all goes well, this replays all the log statements from the binlog, starting from the last position seen in the snapshot.
... unless you run into this:
ERROR 2006 (HY000) at line 21267838: MySQL server has gone away
... and see this in the error.log:
[Warning] Aborted connection 9 to db: 'unconnected' user: 'root' host: 'localhost' (Got a packet bigger than 'max_allowed_packet' bytes)
Darn. An error, and we don't even know which binlog caused it.
And also: why would packets be bigger than they were when they were originally run? If the MariaDB configuration is unchanged, then surely replaying should work just as well as the original run did. Right?
Wrong.
binlog_format
The binary log (binlog) records changes in the database either as statements or as changed rows. Recording statements can be cheaper in terms of space but riskier in the face of simultaneous updates. Recording changed rows is cheaper to process (because indexed) and double checks data integrity across nodes (if there's an update to a row you didn't have, you can abort early and tell the user to fix the corruption).
If we look at a binlog by passing it to mysqlbinlog, an UPDATE might look like this:
UPDATE aboutconfig_item SET datatype_id = 2 WHERE datatype_id = 1
That's about 65 bytes of packet size. And for binlog-format=STATEMENT style binary logging, that is about all the space it takes.
But that simple query can affect many rows, and when they are logged as binlog-format=ROW (and sometimes MIXED) they are instead recorded as follows:
# at 4112460
#240505 14:51:46 server id 1  end_log_pos 4112460 CRC32 0x91652288  Annotate_rows:
#Q> UPDATE aboutconfig_item SET datatype_id = 2 WHERE datatype_id = 1
#240505 14:51:46 server id 1  end_log_pos 4112537 CRC32 0x55707891  Table_map: `example_db`.`aboutconfig_item` mapped to number 799
# at 4112537
# at 4120607
# at 4128557
#240505 14:51:46 server id 1  end_log_pos 4120607 CRC32 0xcd652823  Update_rows: table id 799
#240505 14:51:46 server id 1  end_log_pos 4128557 CRC32 0xebf94293  Update_rows: table id 799
#240505 14:51:46 server id 1  end_log_pos 4133783 CRC32 0x0275ce04  Update_rows: table id 799 flags: STMT_END_F
BINLOG '
4oA3ZhMBAAAATQAAAJnAPgAAAB8DAAAAAAEACmV4YW1wbGVfZGIAEGFib3V0Y29uZmlnX2l0ZW0A
BRISAw/8BQAA/AAEAJF4cFU=
4oA3ZhgBAAAAhh8AAB/gPgAAAB8DAAAAAAAABf//4JmIiOkPmYiI6Q8CAAAADGF1dG9sb2cucGF0
...
HGludm9pY2UucGF5bWVudC10ZXJtLmRlZmF1bHQCAAAAMzDoGS0h
'/*!*/;
# Number of rows: 54
# at 4112299
#240505 14:51:43 server id 1  end_log_pos 4112330 CRC32 0xc9363528  Xid = 271936
COMMIT/*!*/;
The #Q> is purely a comment. The BINLOG statement holds the data of all affected rows. And its “packet size” is significantly larger than 65 bytes.
If you add verbosity (-v) to mysqlbinlog you'll also see:
### UPDATE `example_db`.`aboutconfig_item`
### WHERE
###   @1='2011-03-04 14:36:15'
###   @2='2011-03-04 14:36:15'
###   @3=2
###   @4='autolog.path'
###   @5='/srv/data/example/logs'
### SET
###   @1='2011-03-04 14:36:15'
###   @2='2011-03-04 14:36:15'
###   @3=1
###   @4='autolog.path'
###   @5='/srv/data/example/logs'
This means you get one of those — fairly efficiently stored, but still — for every one of those 54 affected rows. You can see how that might add up if you're doing a coarse update on a big table.
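To get a feel for how that adds up, some back-of-the-envelope arithmetic (the 250,000-row update is a hypothetical, not from our data):

```python
# Rough per-row cost of ROW based logging, using the numbers from the text.
binlog_bytes = 30 * 1024   # observed BINLOG statement size for this UPDATE
rows = 54                  # rows affected by it

per_row = binlog_bytes / rows  # before- and after-image of each row
print('approx bytes per row:', int(per_row))

# A coarse UPDATE touching 250,000 rows in one transaction would then
# produce a single BINLOG statement in the order of a hundred megabytes:
print('approx MiB for 250,000 rows:', int(per_row * 250_000 / 1024 / 1024))
```

Which is exactly the order of magnitude we'll run into below.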
A statement (query) might be large, like an UPDATE with a very long WHERE id IN (1, 2, 3, 4, .... 99999) list. But that is nothing compared to the size of affected data for some queries. This toy example of 54 rows already produces a BINLOG statement of about 30 kilobytes.
max_allowed_packet
When you're using Galera wsrep replication, like we do, then that replication also uses a form of row based replication. The changes get recorded in the binlog as those BINLOG statements we saw above. And, as we've just seen, those statements can be quite large. When feeding those statements to the mysql client and the mysqld server, they are bound by their respective max_allowed_packet settings.
During regular operation, wsrep itself is not limited by that setting. However, when you're replaying binlogs, the setting suddenly becomes important.
After running into the Got a packet bigger than 'max_allowed_packet' bytes error, we whipped up a little script to run on the binlogs that appeared to be problematic:
#!/usr/bin/env python3
# Take mysqlbinlog input. Show longest lines (queries) and longest blocks of
# data (rows to update). Set max_allowed_packet to a higher value.
import sys
from collections import namedtuple

Stats = namedtuple('Stats', 'line blocksize linelength')


def numsep(value):
    "1234567 -> '1_234_567'"
    value = str(value)[-1::-1]
    return '_'.join(value[i:i+3] for i in range(0, len(value), 3))[-1::-1]


def count(fp):
    blockstart = linelength = blocksize = -1
    for lineno, line in enumerate(fp, start=1):
        if line.startswith(b'# at '):
            if blocksize != -1:
                yield Stats(
                    line=blockstart, blocksize=blocksize,
                    linelength=linelength)
            blockstart = lineno
            blocksize = 0
        blocksize += len(line)
        linelength = max(linelength, len(line))
    if blocksize != -1:
        yield Stats(
            line=blockstart, blocksize=blocksize, linelength=linelength)


def main(fp):
    maxblocksize, maxlinelength = -1, -1
    maxblocksizepos, maxlinelengthpos = -1, -1
    for stats in count(fp):
        if stats.blocksize > maxblocksize:
            maxblocksize = stats.blocksize
            maxblocksizepos = stats.line
        if stats.linelength > maxlinelength:
            maxlinelength = stats.linelength
            maxlinelengthpos = stats.line
    print('largest block at line', maxblocksizepos, ':', numsep(maxblocksize))
    print('longest line at line', maxlinelengthpos, ':', numsep(maxlinelength))


if __name__ == '__main__':
    if sys.stdin.isatty():
        print('please feed me mysqlbinlog output', file=sys.stderr)
        exit(1)
    main(sys.stdin.buffer)
Running this on the aforementioned files got us this:
$ mysqlbinlog /path/to/binlogs/mariadb-bin.079219 | ./binlogstats.py
largest block at line 13099650 : 5_655_846
longest line at line 13099646 : 4_186_803
$ mysqlbinlog /path/to/binlogs/mariadb-bin.079220 | ./binlogstats.py
largest block at line 10869443 : 130_846_886
longest line at line 17253384 : 4_708_572
Some of these statements might be very long — up to 5 megabytes — but the row updates can apparently be huge — up to 130 megabytes.
In this case, we determined that slightly over 130MiB was the max_allowed_packet we needed, significantly more than the value we had configured. And indeed, increasing it to 256MiB for good measure made the replay succeed.
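Picking the new value boils down to: take the largest observed block, add headroom, round up to a power of two. A sketch of that reasoning (suggest_max_allowed_packet is my own naming; the 1GiB ceiling is the documented server-side maximum for max_allowed_packet):

```python
# Round the observed largest block up to the next power of two with
# 2x headroom; max_allowed_packet is capped at 1GiB on the server side.
def suggest_max_allowed_packet(largest_block, cap=1 << 30):
    value = 1 << 20  # start at the old 1MiB-ish ballpark
    while value < 2 * largest_block and value < cap:
        value <<= 1
    return value


print(suggest_max_allowed_packet(130_846_886))  # 268435456, i.e. 256MiB
```

Setting it is then a matter of max_allowed_packet = 256M in the server configuration, plus --max-allowed-packet=256M for the mysql client doing the replay.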
2024-04-17 - mariadb check table / galera locking
After upgrading some database nodes from MariaDB 10.3 to 10.6 we encountered some issues with tables not being fully correct. We'd like to CHECK (and maybe REPAIR) TABLE the entire database. But the database must not block queries on other nodes of the cluster.
The corruption we saw might have been something small that crept in during even earlier upgrades; we don't know. But we do know that we'd like to get this sorted before the corruption gets worse after further upgrades.
The requirement that the “show must go on” is obvious. Because we always have three nodes in a cluster, we can safely take one out and do table checks and repairs on a single one.
But, it has to be back in the cluster in time to still get an IST. If it is so far out of sync that it has to do an SST, the whole (repair) exercise is pointless. (Okay — not entirely pointless — we might see the corruption, but we won't have fixed it.)
The following helper script makes repairing/checking easier by running a single CHECK TABLE and then waiting for the node to catch up before doing the next table.
The special spice in this script is wsrep_desync=ON. Without it the node would still accept wsrep updates and block. A DDL statement like ALTER TABLE desyncs the node by itself, but CHECK TABLE does not, hence the explicit setting.
With a small adaptation, you could use this script to run ALTER TABLE FORCE instead: a lengthier operation that allows you to reclaim space as well. This optional change is left to the reader.
#!/bin/sh
#
# This script does a CHECK TABLE on all tables of a MariaDB server,
# but it waits for the node to catch up (Sync) after each table.
#
# When a MariaDB server is part of a Galera cluster, it is allowed
# to go out-of-sync -- assuming you ensure that regular clients are
# disallowed access during this time.
#
# But if you run a CHECK TABLE, it LOCKs by default, instead of going
# out-of-sync. The wsrep_OSU_method=RSU + wsrep_desync=ON settings
# here make sure that the other nodes in the Galera cluster can
# continue operating.
#
# Usage:
#
#   mariadb-galera-check-all-tables 2>&1 | tee output.log
#
# It will write a mariadb-galera-check-all-tables.table_cache file,
# iterate over it, CHECK TABLE for each table, and wait for it to
# go back to synced state.
#
# Afterwards you can check the log for issues found and repairs done.
#
# (ossobv/2024-04-17)

# Statements to allow us to go out of sync.
BEGIN_WORK='SET SESSION wsrep_OSU_method=RSU; SET GLOBAL wsrep_desync=ON'
END_WORK='SET SESSION wsrep_OSU_method=TOI; SET GLOBAL wsrep_desync=OFF'

sql() {
    # If needed, you can add appropriate auth flags here.
    mysql -NBe "$1"
}

when() {
    date --rfc-3339=seconds
}

# Get tables into a file, use get_tables() to get them.
make_tables() {
    # Repairing appears to work fine on:
    # - InnoDB, MyISAM, Aria, CSV
    # Not fine on:
    # - PERFORMANCE_SCHEMA (storage engine doesn't support check)
    sql '
        SELECT CONCAT("`", table_schema, "`.`", table_name, "`") as tbl
        FROM information_schema.tables
        WHERE table_type = '\''BASE TABLE'\''
          AND engine NOT IN ('\''PERFORMANCE_SCHEMA'\'')
        ORDER BY table_schema, table_name;' \
        >"$0.table_cache"
}

# Use intermediate function, in case you want to alter the tables
# or manually set them.
get_tables() {
    cat "$0.table_cache"
}

check_table() {
    local table="$1"
    echo -n "$(when): CHECK TABLE $table ... "
    # Do a CHECK TABLE + ANALYZE TABLE for good measure.
    local work="CHECK TABLE $table; ANALYZE TABLE $table"
    # Wrapped in $() to collapse whitespace/linefeeds.
    echo $(sql "$BEGIN_WORK; $work; $END_WORK;")
}

# Check galera Write Set REPlication status;
# return OK when in Sync or ERROR if not.
galera_is_in_sync() {
    sql "SHOW STATUS LIKE 'wsrep_local_state_comment';" |
        grep -q '[[:blank:]]Synced$'
}

# Make tables (comment if you're editing the tables file)
make_tables
#exit  # uncomment to just produce the tables file

# Make sure the server can go back to Synced state
# (might be invoked twice, but it's idempotent)
trap 'sql "$END_WORK;"; echo; echo "$(when): $END_WORK"' EXIT INT

# Loop over tables from the cache file
get_tables | while read -r table; do
    check_table "$table" || break
    while ! galera_is_in_sync; do
        echo "$(when): (not in sync yet)"
        sleep 10 || break
    done
done
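The "left to the reader" adaptation mentioned above boils down to swapping the per-table work statement. Sketched here in Python for clarity (work_statement is my naming; the shell script builds the equivalent string):

```python
# The per-table work statement: CHECK+ANALYZE by default, or a full
# rebuild (which also reclaims free space) when rebuild=True.
def work_statement(table, rebuild=False):
    if rebuild:
        return 'ALTER TABLE %s FORCE' % table
    return 'CHECK TABLE %s; ANALYZE TABLE %s' % (table, table)


print(work_statement('`db`.`tbl`', rebuild=True))
```

The surrounding BEGIN_WORK/END_WORK and sync-wait loop stay exactly the same.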
Run this for each MariaDB node in your Galera cluster, one at a time, while they are not taking client connections. And sleep better, knowing that all corruption is gone.
2024-03-29 - systemd-networkd-wait-online / stalling and failing
systemd-networkd-wait-online.service failing? Maybe it's IPv6.
Do you have a list of failed systemd units that looks like this?
# systemctl list-units --failed
  UNIT                                   LOAD   ACTIVE SUB    DESCRIPTION
● systemd-networkd-wait-online.service   loaded failed failed Wait for Network to be Configured
Then that could be due to IPv6 networking not being set up properly.
Check the interfaces list:
# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens18            UP             10.20.30.41/31 fe80::a124:092f:fa18:109e/64
ens19            UP             192.168.1.3/31 fe80::a124:092f:fa15:01d5/64
Proper IPv4 IPs, and only link-local IPs for IPv6? Likely you're not using IPv6 at all here.
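That diagnosis ("only link-local IPv6") can be automated. A sketch that classifies `ip -br a` output lines (has_global_ipv6 is my own helper):

```python
# Decide whether an `ip -br a` line shows a global (non link-local,
# non loopback) IPv6 address. fe80::/10 is link-local, ::1 is loopback.
def has_global_ipv6(ip_br_line):
    addrs = ip_br_line.split()[2:]
    return any(
        ':' in addr
        and not addr.lower().startswith('fe80:')
        and not addr.startswith('::1/')
        for addr in addrs)


print(has_global_ipv6(
    'ens18 UP 10.20.30.41/31 fe80::a124:092f:fa18:109e/64'))  # False
```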
The logs have this to say:
# journalctl -u systemd-networkd*service
14:25:58 systemd[1]: Starting Network Configuration...
14:25:58 systemd-networkd[1728946]: ens19: Link UP
14:25:58 systemd-networkd[1728946]: ens19: Gained carrier
14:25:58 systemd-networkd[1728946]: ens18: Link UP
14:25:58 systemd-networkd[1728946]: ens18: Gained carrier
14:25:58 systemd-networkd[1728946]: lo: Link UP
14:25:58 systemd-networkd[1728946]: lo: Gained carrier
14:25:58 systemd-networkd[1728946]: ens19: Gained IPv6LL
14:25:58 systemd-networkd[1728946]: ens18: Gained IPv6LL
14:25:58 systemd-networkd[1728946]: Enumeration completed
14:25:58 systemd[1]: Started Network Configuration.
14:25:58 systemd[1]: Starting Wait for Network to be Configured...
14:27:58 systemd-networkd-wait-online[1728947]: Timeout occurred while waiting for network connectivity.
14:27:58 systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
14:27:58 systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
14:27:58 systemd[1]: Failed to start Wait for Network to be Configured.
14:41:05 systemd-networkd[1728946]: ens19: Re-configuring with /run/systemd/network/10-netplan-ens19.network
14:41:05 systemd-networkd[1728946]: ens19: DHCPv6 lease lost
14:41:05 systemd-networkd[1728946]: ens18: Re-configuring with /run/systemd/network/10-netplan-ens18.network
14:41:05 systemd-networkd[1728946]: ens18: DHCPv6 lease lost
14:41:05 systemd-networkd[1728946]: ens19: Re-configuring with /run/systemd/network/10-netplan-ens19.network
14:41:05 systemd-networkd[1728946]: ens19: DHCPv6 lease lost
14:41:05 systemd-networkd[1728946]: ens18: Re-configuring with /run/systemd/network/10-netplan-ens18.network
14:41:05 systemd-networkd[1728946]: ens18: DHCPv6 lease lost
14:41:12 systemd[1]: Starting Wait for Network to be Configured...
14:43:12 systemd-networkd-wait-online[1733339]: Timeout occurred while waiting for network connectivity.
14:43:12 systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
14:43:12 systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
14:43:12 systemd[1]: Failed to start Wait for Network to be Configured.
What's in that 10-netplan-ens18.network?
# cat /run/systemd/network/10-netplan-ens18.network
[Match]
Name=ens18

[Network]
LinkLocalAddressing=ipv6
Address=10.20.30.41/31
DNS=1.1.1.1
DNS=8.8.8.8

[Route]
Destination=0.0.0.0/0
Gateway=10.20.30.40
Again, only IPv4 addresses.
Solution
Add link-local: [ipv4] to the interfaces in the Netplan configuration:
--- a/netplan/01-netcfg.yaml
+++ b/netplan/01-netcfg.yaml
@@ -11,9 +11,11 @@ network:
       addresses:
       - 1.1.1.1
       - 8.8.8.8
+      link-local: [ipv4]
     ens19:
       addresses:
       - 192.168.1.3/31
       routes:
       - to: 192.168.1.0/24
         via: 192.168.1.2
+      link-local: [ipv4]
Now run a netplan apply and observe:
# systemctl restart systemd-networkd-wait-online.service
That succeeds immediately, like it should.
In 10-netplan-ens18.network (and the others), LinkLocalAddressing=ipv6 is now LinkLocalAddressing=ipv4.
And the logs also appear happier:
# journalctl -u systemd-networkd*service
14:54:46 systemd-networkd[1728946]: ens19: Re-configuring with /run/systemd/network/10-netplan-ens19.network
14:54:46 systemd-networkd[1728946]: ens19: DHCPv6 lease lost
14:54:46 systemd-networkd[1728946]: ens18: Re-configuring with /run/systemd/network/10-netplan-ens18.network
14:54:46 systemd-networkd[1728946]: ens18: DHCPv6 lease lost
14:54:46 systemd-networkd[1728946]: ens19: Re-configuring with /run/systemd/network/10-netplan-ens19.network
14:54:46 systemd-networkd[1728946]: ens18: Re-configuring with /run/systemd/network/10-netplan-ens18.network
14:54:52 systemd[1]: Starting Wait for Network to be Configured...
14:54:53 systemd[1]: Finished Wait for Network to be Configured.
2024-03-22 - nmap ssl-enum-ciphers / haproxy / tls / no results
When doing an nmap ssl-enum-ciphers scan on a Haproxy machine, I got zero working ciphers. But connecting with TLS worked just fine. What is up?
While updating the TLS configuration to disable certain unsafe ciphers, we wanted to test what the server currently offers. nmap with the ssl-enum-ciphers script should work fine for this, and it is reasonably fast:
$ nmap -Pn --script=ssl-enum-ciphers -p 443 google.com
...
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.0: ...
|   TLSv1.1: ...
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
...
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|_  least strength: C
Nmap done: 1 IP address (1 host up) scanned in 1.27 seconds
Okay, that is convenient. On to the relevant host.
On the host, we've configured TLSv1.2 and TLSv1.3 with a limited set of ciphers. Let's go:
$ nmap -Pn --script=ssl-enum-ciphers -p 443 x.x.x.x
...
443/tcp open  https
Nothing?!
Apparently the nmap 7.91 scripts on Ubuntu/Jammy are a bit outdated and have no TLSv1.3 support. It was added in nmap commit 7c61f7c.
We can patch our Ubuntu/Jammy nmap with this adapted patch:
nmap-ssl-enum-ciphers-tls13-for-ubuntu-jammy.patch
$ cd /usr/share/nmap
$ sudo patch -p1 < /PATH/TO/nmap-ssl-enum-ciphers-tls13-for-ubuntu-jammy.patch
Try again:
$ nmap -Pn --script=ssl-enum-ciphers -p 443 $haproxy
...
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|     cipher preference: client
|_  least strength: A
Nice! TLSv1.3 results. But where is the TLSv1.2?
We know that the script supports TLSv1.2, as seen with the google.com example above. In fact, rerunning it now shows that google.com also does TLSv1.3, as you'd expect.
The Haproxy config looks sane:
global
    # "server ciphers"
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305
    ssl-default-bind-ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options prefer-client-ciphers no-tls-tickets ssl-min-ver TLSv1.2 ssl-max-ver TLSv1.3
And openssl s_client -no_tls1_3 -connect will happily connect with TLSv1.2.
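The same TLSv1.2 check can be done from Python's ssl module by capping the handshake version (connects_with_tls12 is my own helper; the host is a placeholder):

```python
import socket
import ssl


# Probe whether a server completes a handshake when we cap ourselves at
# TLSv1.2 -- roughly the equivalent of `openssl s_client -no_tls1_3`.
def connects_with_tls12(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == 'TLSv1.2'
    except (OSError, ssl.SSLError):
        return False
```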
If we try a kube-apiserver with the same set of ciphers configured, it does yield both TLSv1.2 and TLSv1.3:
# kube-apiserver [...] --tls-cipher-suites=\
TLS_AES_256_GCM_SHA384,TLS_AES_128_GCM_SHA256,TLS_CHACHA20_POLY1305_SHA256,\
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
$ nmap -Pn --script=ssl-enum-ciphers -p 443 $kube-apiserver
...
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - unknown
|       TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 - unknown
|       TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - unknown
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|     cipher preference: server
|_  least strength: unknown
So, something is different with Haproxy.
The openssl s_client output did not give many clues, but there was one: Client Certificate Types: RSA sign, ECDSA sign for the kube-apiserver and Client Certificate Types: RSA sign, DSA sign, ECDSA sign for Haproxy.
And indeed, there is a related fix in nmap commit 034ea73.
Apply the following patch as well:
nmap-ssl-enum-ciphers-tls12-ecdsa-in-ubuntu-jammy.patch
$ cd /usr/share/nmap
$ sudo patch -p1 < /PATH/TO/nmap-ssl-enum-ciphers-tls12-ecdsa-in-ubuntu-jammy.patch
And now a rescan finally shows both TLS protocols on Haproxy as well.
$ nmap -Pn --script=ssl-enum-ciphers -p 443 $haproxy
...
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
|     compressors:
|       NULL
|     cipher preference: client
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|     cipher preference: client
|_  least strength: A
(Aha! That “unknown” is now gone too.)
2024-01-23 - gpg-agent / ssh / ed25519 / agent refused
After putting in all the work to get ED25519 OpenPGP keys in my Yubikey smart card, I was slightly frustrated that the SSH support "sometimes" didn't work.
I thought I had tested that it worked, but today it didn't.
$ ssh my_server
sign_and_send_pubkey: signing failed for ED25519 "cardno:000612345678" from agent: agent refused operation
walter@my_server: Permission denied (publickey).
That's odd.
Adding some debug to the gpg-agent (debug 1024 in gpg-agent.conf) got me this:
DBG: chan_10 -> SERIALNO
DBG: chan_10 <- S SERIALNO D2760001240103040006123456780000
DBG: chan_10 <- OK
DBG: chan_10 -> GETINFO card_list
DBG: chan_10 <- S SERIALNO D2760001240103040006123456780000
DBG: chan_10 <- OK
DBG: chan_10 -> SERIALNO --demand=D2760001240103040006123456780000
DBG: chan_10 <- S SERIALNO D2760001240103040006123456780000
DBG: chan_10 <- OK
DBG: chan_10 -> GETATTR $AUTHKEYID
DBG: chan_10 <- S $AUTHKEYID OPENPGP.3
DBG: chan_10 <- OK
DBG: chan_10 -> GETATTR SERIALNO
DBG: chan_10 <- S SERIALNO D2760001240103040006123456780000
DBG: chan_10 <- OK
DBG: chan_10 -> READKEY OPENPGP.3
DBG: chan_10 <- [ 44 20 28 31 ...(83 byte(s) skipped) ]
DBG: chan_10 <- OK
DBG: chan_10 -> GETATTR $DISPSERIALNO
DBG: chan_10 <- S $DISPSERIALNO 000612345678
DBG: chan_10 <- OK
ssh request handler for request_identities (11) ready
ssh request handler for sign_request (13) started
DBG: chan_10 -> SERIALNO --demand=D2760001240103040006123456780000
DBG: chan_10 <- S SERIALNO D2760001240103040006123456780000
DBG: chan_10 <- OK
detected card with S/N D2760001240103040006123456780000
DBG: chan_10 -> SETDATA 3021...>
DBG: chan_10 <- OK
DBG: chan_10 -> PKAUTH D459...
scdaemon[898750]: app_auth failed: Invalid value
DBG: chan_10 <- ERR 100663351 Invalid value <SCD>
smartcard signing failed: Invalid value
ssh sign request failed: Invalid value <SCD>
When I tried a different server, things did work:
...
DBG: chan_10 -> SERIALNO --demand=D2760001240103040006114652250000
DBG: chan_10 <- S SERIALNO D2760001240103040006114652250000
DBG: chan_10 <- OK
detected card with S/N D2760001240103040006114652250000
DBG: chan_10 -> SETDATA 3021...>
DBG: chan_10 <- OK
DBG: chan_10 -> PKAUTH D459...
DBG: chan_10 <- [ 44 20 1d 30 ...(52 byte(s) skipped) ]
DBG: chan_10 <- OK
No explanations to be found here. But the different server did provide the clue that the OpenSSH version was involved.
I tried a few different OpenSSH versions:
trusty - 1:6.6p1-2ubuntu2.13 - Accepted publickey ... ED25519 de:14:...
bionic - 1:7.6p1-4ubuntu0.7  - Accepted publickey ... ED25519 SHA256:lKE...
focal  - 1:8.2p1-4ubuntu0.9  - Accepted publickey ... ED25519 SHA256:lKE...
jammy  - 1:8.9p1-3ubuntu0.6  - <failure>
And in fact (tested with a bit of SSH key forwarding) using an older client against the newer server also worked.
OpenSSH client side debugging started turning up interesting differences after the third -v:
-- OpenSSH_8.9p1 Ubuntu-3ubuntu0.6, OpenSSL 3.0.2 15 Mar 2022 --
-- debug1: Reading configuration data /home/walter/.ssh/config --
-- debug3: kex names ok: [diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1] --
...
1. debug1: Connecting to old_server port 22.
2. debug1: Connecting to my_server port 22.
3. debug1: Connecting to rsa_server port 22.
4. debug1: Connecting to hostkey_server port 22.
-- debug3: set_sock_tos: set socket 3 IP_TOS 0x10 --
-- debug1: Connection established. --
...
-- debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.6 --
1. debug1: Remote protocol version 2.0, remote software version OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
2. debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.6
3. debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.6
4. debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.6
1. debug1: compat_banner: match: OpenSSH_8.2p1 Ubuntu-4ubuntu0.9 pat OpenSSH* compat 0x04000000
2. debug1: compat_banner: match: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6 pat OpenSSH* compat 0x04000000
3. debug1: compat_banner: match: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6 pat OpenSSH* compat 0x04000000
4. debug1: compat_banner: match: OpenSSH_8.9p1 Ubuntu-3ubuntu0.6 pat OpenSSH* compat 0x04000000
Server 1 was an older server; servers 2, 3 and 4 are the new server, with: (2) failure, (3) an RSA key, and (4) a forced ssh-ed25519 host key algorithm.
The log continues...
-- debug2: fd 3 setting O_NONBLOCK --
1. debug1: Authenticating to old_server:22 as 'walter'
2. debug1: Authenticating to my_server:22 as 'walter'
3. debug1: Authenticating to rsa_server:22 as 'walter'
4. debug1: Authenticating to hostkey_server:22 as 'walter'
-- debug3: send packet: type 20 --
-- debug1: SSH2_MSG_KEXINIT sent --
-- debug3: receive packet: type 20 --
-- debug1: SSH2_MSG_KEXINIT received --
-- debug2: local client KEXINIT proposal --
1. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c,kex-strict-c-v00@openssh.com
2. debug2: KEX algorithms: curve25519-sha256,ext-info-c,kex-strict-c-v00@openssh.com
3. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c,kex-strict-c-v00@openssh.com
4. debug2: KEX algorithms: curve25519-sha256,ext-info-c,kex-strict-c-v00@openssh.com
1. debug2: host key algorithms: ecdsa-sha2-nistp256
2. debug2: host key algorithms: ecdsa-sha2-nistp256
3. debug2: host key algorithms: ecdsa-sha2-nistp256
4. debug2: host key algorithms: ssh-ed25519
The difference here is me explicitly setting HostKeyAlgorithms=ssh-ed25519 for server 4.
Some more uninteresting logs...
-- debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com --
-- debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com --
-- debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 --
-- debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 --
-- debug2: compression ctos: none,zlib@openssh.com,zlib --
-- debug2: compression stoc: none,zlib@openssh.com,zlib --
-- debug2: languages ctos: --
-- debug2: languages stoc: --
-- debug2: first_kex_follows 0 --
-- debug2: reserved 0 --
-- debug2: peer server KEXINIT proposal --
1. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
2. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,kex-strict-s-v00@openssh.com
3. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,kex-strict-s-v00@openssh.com
4. debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,sntrup761x25519-sha512@openssh.com,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,kex-strict-s-v00@openssh.com
1. debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519
2. debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
3. debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
4. debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
-- debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com --
-- debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com --
-- debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 --
-- debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 --
-- debug2: compression ctos: none,zlib@openssh.com --
-- debug2: compression stoc: none,zlib@openssh.com --
-- debug2: languages ctos: --
-- debug2: languages stoc: --
-- debug2: first_kex_follows 0 --
-- debug2: reserved 0 --
1.
2. debug3: kex_choose_conf: will use strict KEX ordering
3. debug3: kex_choose_conf: will use strict KEX ordering
4. debug3: kex_choose_conf: will use strict KEX ordering
This could be something: "will use strict KEX ordering" is only seen on the newer servers.
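Whether a server advertises strict KEX can be checked by grepping a captured debug log for the kex-strict-s-v00@openssh.com marker. A minimal sketch with a stand-in log snippet (in practice, capture the real thing with something like ssh -vvv host true 2> /tmp/kexinit.log):

```shell
# Stand-in for a captured "ssh -vvv" log; path and contents are
# placeholders for illustration only.
cat > /tmp/kexinit.log <<'EOF'
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,kex-strict-s-v00@openssh.com
EOF

# Newer OpenSSH servers include this pseudo-algorithm in their KEXINIT
# proposal; its presence is what makes the client log
# "will use strict KEX ordering".
if grep -q 'kex-strict-s-v00@openssh\.com' /tmp/kexinit.log; then
    echo 'server advertises strict KEX'
fi
```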
-- debug1: kex: algorithm: curve25519-sha256 --
1. debug1: kex: host key algorithm: ecdsa-sha2-nistp256
2. debug1: kex: host key algorithm: ecdsa-sha2-nistp256
3. debug1: kex: host key algorithm: ecdsa-sha2-nistp256
4. debug1: kex: host key algorithm: ssh-ed25519
-- debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none --
-- debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none --
-- debug3: send packet: type 30 --
-- debug1: expecting SSH2_MSG_KEX_ECDH_REPLY --
-- debug3: receive packet: type 31 --
-- debug1: SSH2_MSG_KEX_ECDH_REPLY received --
1. debug1: Server host key: ecdsa-sha2-nistp256 SHA256:abc...
2. debug1: Server host key: ecdsa-sha2-nistp256 SHA256:abc...
3. debug1: Server host key: ecdsa-sha2-nistp256 SHA256:abc...
4. debug1: Server host key: ssh-ed25519 SHA256:def...
1. debug3: record_hostkey: found key type ECDSA in file /home/walter/.ssh/known_hosts:1234
2. debug3: record_hostkey: found key type ECDSA in file /home/walter/.ssh/known_hosts:1234
3. debug3: record_hostkey: found key type ECDSA in file /home/walter/.ssh/known_hosts:1234
4. debug3: record_hostkey: found key type ED25519 in file /home/walter/.ssh/known_hosts:1234
1. debug3: load_hostkeys_file: loaded 1 keys from old_server
2. debug3: load_hostkeys_file: loaded 1 keys from my_server
3. debug3: load_hostkeys_file: loaded 1 keys from rsa_server
4. debug3: load_hostkeys_file: loaded 1 keys from hostkey_server
1. debug1: Host 'old_server' is known and matches the ECDSA host key.
2. debug1: Host 'my_server' is known and matches the ECDSA host key.
3. debug1: Host 'rsa_server' is known and matches the ECDSA host key.
4. debug1: Host 'hostkey_server' is known and matches the ED25519 host key.
-- debug1: Found key in /home/walter/.ssh/known_hosts:1235 --
-- debug3: send packet: type 21 --
1.
2. debug1: ssh_packet_send2_wrapped: resetting send seqnr 3
3. debug1: ssh_packet_send2_wrapped: resetting send seqnr 3
4. debug1: ssh_packet_send2_wrapped: resetting send seqnr 3
-- debug2: ssh_set_newkeys: mode 1 --
-- debug1: rekey out after 134217728 blocks --
-- debug1: SSH2_MSG_NEWKEYS sent --
-- debug1: expecting SSH2_MSG_NEWKEYS --
-- debug3: receive packet: type 21 --
1.
2. debug1: ssh_packet_read_poll2: resetting read seqnr 3
3. debug1: ssh_packet_read_poll2: resetting read seqnr 3
4. debug1: ssh_packet_read_poll2: resetting read seqnr 3
The ssh_packet_send2_wrapped and ssh_packet_read_poll2 seqnr resets are also new.
-- debug1: SSH2_MSG_NEWKEYS received --
-- debug2: ssh_set_newkeys: mode 0 --
-- debug1: rekey in after 134217728 blocks --
-- debug2: get_agent_identities: ssh_agent_bind_hostkey: agent refused operation --
-- debug1: get_agent_identities: agent returned 3 keys --
1. debug1: Will attempt key: cardno:000612345678 ED25519 SHA256:lKE... agent
2. debug1: Will attempt key: cardno:000612345678 ED25519 SHA256:lKE... agent
3. debug1: Will attempt key: cardno:000611111111 RSA SHA256:vYJ... agent
4. debug1: Will attempt key: cardno:000612345678 ED25519 SHA256:lKE... agent
The RSA key is obviously different...
-- debug2: pubkey_prepare: done --
-- debug3: send packet: type 5 --
-- debug3: receive packet: type 7 --
-- debug1: SSH2_MSG_EXT_INFO received --
1. debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com>
2. debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com>
3. debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com>
4. debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com>
Extra server signature algorithms shouldn't matter.
1.
2. debug1: kex_input_ext_info: publickey-hostbound@openssh.com=<0>
3. debug1: kex_input_ext_info: publickey-hostbound@openssh.com=<0>
4. debug1: kex_input_ext_info: publickey-hostbound@openssh.com=<0>
The kex_input_ext_info: publickey-hostbound@openssh.com extension is also new...
-- debug3: receive packet: type 6 --
-- debug2: service_accept: ssh-userauth --
-- debug1: SSH2_MSG_SERVICE_ACCEPT received --
-- debug3: send packet: type 50 --
-- debug3: receive packet: type 51 --
-- debug1: Authentications that can continue: publickey --
-- debug3: start over, passed a different list publickey --
-- debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password --
-- debug3: authmethod_lookup publickey --
-- debug3: remaining preferred: keyboard-interactive,password --
-- debug3: authmethod_is_enabled publickey --
-- debug1: Next authentication method: publickey --
1. debug1: Offering public key: cardno:000612345678 ED25519 SHA256:lKE... agent
2. debug1: Offering public key: cardno:000612345678 ED25519 SHA256:lKE... agent
3. debug1: Offering public key: cardno:000604261342 RSA SHA256:vYJ... agent
4. debug1: Offering public key: cardno:000612345678 ED25519 SHA256:lKE... agent
-- debug3: send packet: type 50 --
-- debug2: we sent a publickey packet, wait for reply --
-- debug3: receive packet: type 60 --
1. debug1: Server accepts key: cardno:000612345678 ED25519 SHA256:lKE... agent
2. debug1: Server accepts key: cardno:000612345678 ED25519 SHA256:lKE... agent
3. debug1: Server accepts key: cardno:000604261342 RSA SHA256:vYJ... agent
4. debug1: Server accepts key: cardno:000612345678 ED25519 SHA256:lKE... agent
1. debug3: sign_and_send_pubkey: using publickey with ED25519 SHA256:lKE...
2. debug3: sign_and_send_pubkey: using publickey-hostbound-v00@openssh.com with ED25519 SHA256:lKE...
3. debug3: sign_and_send_pubkey: using publickey-hostbound-v00@openssh.com with RSA SHA256:vYJ...
4. debug3: sign_and_send_pubkey: using publickey-hostbound-v00@openssh.com with ED25519 SHA256:lKE...
1. debug3: sign_and_send_pubkey: signing using ssh-ed25519 SHA256:lKE...
2. debug3: sign_and_send_pubkey: signing using ssh-ed25519 SHA256:lKE...
3. debug3: sign_and_send_pubkey: signing using rsa-sha2-512 SHA256:vYJ...
4. debug3: sign_and_send_pubkey: signing using ssh-ed25519 SHA256:lKE...
1.
2. sign_and_send_pubkey: signing failed for ED25519 "cardno:000612345678" from agent: agent refused operation
3.
4. ...
1. Authenticated to old_server:22 using "publickey".
2. debug1: No more authentication methods to try.
3. Authenticated to walter-dev.osso.nl ([217.21.199.35]:22) using "publickey".
4. Authenticated to hostkey_server:22 using "publickey".
Here we saw that:
- Connecting to the old server succeeded with the ED25519 key.
- Connecting to the new server failed with the ED25519 key.
- Connecting to the new server succeeded with an RSA key.
- Connecting to the new server succeeded with HostKeyAlgorithms set to ED25519.
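The four transcripts above boil down to capturing each session's debug output and diffing the logs. A sketch of that workflow with stand-in log files (real ones would come from e.g. ssh -vvv my_server true 2> /tmp/ssh-my_server.log; host names and paths are placeholders):

```shell
# Fabricated one-line debug logs standing in for two captured sessions.
printf 'debug1: kex: host key algorithm: ecdsa-sha2-nistp256\n' > /tmp/ssh-a.log
printf 'debug1: kex: host key algorithm: ssh-ed25519\n' > /tmp/ssh-b.log

# diff exits non-zero when the files differ; "|| true" keeps a script going.
diff /tmp/ssh-a.log /tmp/ssh-b.log || true
```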
The culprit looks like publickey-hostbound-v00@openssh.com. According to the SSH agent restriction documentation, this extension passes the destination (including the server's host key) to the agent, where it should be validated and approved.
If either the client or the server doesn't support this extension, things work just fine. But if they both do, then the gpg-agent fails to properly sign the public key negotiation data.
Quoting that documentation: "Destination restrictions in ssh-agent strongly depend on the agent being able to parse the data being signed, and the contents having all the information needed to compare against the restrictions listed for a given key. SSH user authentication requests have a format that meets these requirements, but other uses of the agent protocol are not likely to."
For now, my guess is that gpg-agent indeed does not support the extra destination information from the extension. The workaround, on the client, is either:
- set -oHostKeyAlgorithms=ssh-ed25519, or
- set -oPubkeyAuthentication=unbound
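The same options can be made permanent per host. A sketch of the ~/.ssh/config form (the host name is a placeholder; PubkeyAuthentication=unbound needs an OpenSSH 8.9+ client):

```
# ~/.ssh/config -- pick one of the two options
Host my_server
    PubkeyAuthentication unbound
    # or, alternatively:
    # HostKeyAlgorithms ssh-ed25519
```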
I don't know why the first option works, but it does. The second one should disable the publickey-hostbound-v00@openssh.com extension, making negotiation fall back to the original behaviour. So, if you're affected by this, it may make sense to add that to your ~/.ssh/config.
The relevant ticket for GnuPG is "OpenSSH 8.9, 9.0, and 9.1 can't authenticate with gpg-agent and usb token".