Notes to self, 2017
2017-12-03 - flake8 / vim / python2 / python3
In 2015, I wrote a quick recipe to get F7-key flake8 checking in Vim for both python2 and python3, using the nvie/vim-flake8 Vim plugin.
Here's a quick update that works today. Tested on Ubuntu/Zesty.
$ sudo apt-get install python3-flake8               # py3 version, no cli
$ sudo -H pip install flake8                        # py2 version, with cli
$ sudo cp /usr/local/bin/flake8{,.2}                # copy to flake8.2
$ sudo sed -i -e 's/python$/python3/' /usr/local/bin/flake8  # update shebang
$ sudo mv /usr/local/bin/flake8{,.3}                # rename edited one to flake8.3
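To double-check that the two copies ended up with the interpreters you expect, a quick sanity check could look like this (assuming /usr/local/bin is on your PATH):

$ head -n1 /usr/local/bin/flake8.2 /usr/local/bin/flake8.3  # expect python and python3 shebangs
$ flake8.2 --version && flake8.3 --version                  # both should run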
Get the two files python_flake8.vim and flake8.vim from the nvie repository mentioned above, and patch them with these changes:
$ diff -pu /usr/share/vim/vim80/ftplugin/python_flake8.vim{.orig,}; \
    diff -pu /usr/share/vim/vim80/autoload/flake8.vim{.orig,}
--- /usr/share/vim/vim80/ftplugin/python_flake8.vim.orig  2017-12-03 10:44:17.343019719 +0100
+++ /usr/share/vim/vim80/ftplugin/python_flake8.vim       2017-12-03 10:45:11.305310916 +0100
@@ -46,7 +46,8 @@ endfunction
 " remapped it already (or a mapping exists already for <F7>)
 if !exists("no_plugin_maps") && !exists("no_flake8_maps")
     if !hasmapto('Flake8(') && !hasmapto('flake8#Flake8(')
-        noremap <buffer> <F7> :call flake8#Flake8()<CR>
+        noremap <buffer> <F7> :call flake8#Flake8("flake8.2")<CR>
+        noremap <buffer> <F8> :call flake8#Flake8("flake8.3")<CR>
     endif
 endif
--- /usr/share/vim/vim80/autoload/flake8.vim.orig  2017-12-03 10:52:39.487212782 +0100
+++ /usr/share/vim/vim80/autoload/flake8.vim       2017-12-03 10:51:30.107243223 +0100
@@ -10,8 +10,8 @@ set cpo&vim
 "" ** external ** {{{
-function! flake8#Flake8()
-    call s:Flake8()
+function! flake8#Flake8(flake8_cmd)
+    call s:Flake8(a:flake8_cmd)
     call s:Warnings()
 endfunction
@@ -66,11 +66,11 @@ function! s:DeclareOption(name, globalPr
     endif
 endfunction  " }}}
-function! s:Setup()  " {{{
+function! s:Setup(flake8_cmd)  " {{{
     "" read options
     " flake8 command
-    call s:DeclareOption('flake8_cmd', '', '"flake8"')
+    call s:DeclareOption('flake8_cmd', '', '"'.a:flake8_cmd.'"')
     " quickfix
     call s:DeclareOption('flake8_quickfix_location', '', '"belowright"')
     call s:DeclareOption('flake8_quickfix_height', '', 5)
@@ -105,9 +105,9 @@ endfunction  " }}}
 "" do flake8
-function! s:Flake8()  " {{{
+function! s:Flake8(flake8_cmd)  " {{{
     " read config
-    call s:Setup()
+    call s:Setup(a:flake8_cmd)
     if !executable(s:flake8_cmd)
         echoerr "File " . s:flake8_cmd . " not found. Please install it first."
2017-11-30 - reprepro / multiversion / build recipe
We used to use reprepro (4.17) to manage our package repository. However, it did not support serving multiple versions of the same package. The Benjamin Drung version from GitHub/profitbricks/reprepro does. Here's our recipe to build it.
$ git clone -b 5.1.1-multiple-versions https://github.com/profitbricks/reprepro.git
$ cd reprepro
It lacks a couple of tags, so we'll add some lightweight ones.
$ git tag 4.17.1 2d93fa35dd917077e9248c7e564648da3a5f1fe3 &&
    git tag 4.17.1-1 0c9f0f44a84f67ee5f14bccf6507540d4f7f8e39 &&
    git tag 5.0.0 e7e4c1f1382d812c3759617d5f82b8a46ea0f096 &&
    git tag 5.0.0-1 297835acd73d1644bfee4544a0878a0c36c411a7 &&
    git tag 5.1.0 8db8e8af8fffe82ae46ca0ec776dfe357f329635 &&
    git tag 5.1.0-1 06adb356517ab3e3089706e29dfab43bba09f0a9 &&
    git tag 5.1.1-2.1 b6b28f04466851234b4a94aa33132082094e8780
Now git-describe works; we need --tags because these are lightweight tags, not annotated ones made with tag -a.
$ git log -1 --format=oneline
b37d8daba6bfb4c20241cf623a24e64532dd8868 Accept .ddeb files as dbgsym packages
$ git describe
fatal: No annotated tags can describe 'b37d8daba6bfb4c20241cf623a24e64532dd8868'.
However, there were unannotated tags: try --tags.
$ git describe --tags
5.1.1-2.1-59-gb37d8da
Good enough for us. We'll need to alter that version a bit to make it Debian-package friendly though. And since the tilde (~) means pre-release, we'll use the plus (+) instead, turning the above into 5.1.1+2.1+59.gb37d8da (optionally suffixed with a build version, like -0osso1).
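If you'd rather script that mangling than do it by hand, a sed one-liner along these lines would do (my own shortcut, not part of the original recipe):

$ git describe --tags | sed -e 's/-\([0-9]\+\)-g/+\1.g/; s/-/+/'
5.1.1+2.1+59.gb37d8da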
We amend the appropriate lines in the debian changelog with the chosen version.
For Ubuntu/Xenial we additionally had to replace libgpgme-dev with libgpgme11-dev in debian/control.
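If you prefer a command over a manual edit, a simple sed does it (assuming you're in the reprepro checkout):

$ sed -i -e 's/libgpgme-dev/libgpgme11-dev/' debian/control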
Lastly, we commit our updated control file and changelog in a throwaway commit, so gbp(1) won't complain about untracked files. Then we build, creating a source package of this exact state.
reprepro (5.1.1+2.1+59.gb37d8da-0osso1) stable; urgency=medium

  * OSSO build of b37d8daba6b (+59 since b6b28f0446)
  * gbp buildpackage --git-upstream-tree=HEAD \
      --git-debian-branch=5.1.1-multiple-versions [-us -uc] -sa

 -- Walter Doekes <wjdoekes+reprepro@osso.nl>  Thu, 30 Nov 2017 10:10:52 +0100
$ gbp buildpackage --git-upstream-tree=HEAD --git-debian-branch=5.1.1-multiple-versions [-us -uc] -sa
Omit the -us -uc if you can sign the build with gpg. And if you use gpg-agent forwarding with gpg2, make sure gpg(1) references gpg2 and not gpg1.
You should end up with these files:
reprepro_5.1.1+2.1+59.gb37d8da-0osso1_amd64.build
reprepro_5.1.1+2.1+59.gb37d8da-0osso1_amd64.changes
reprepro_5.1.1+2.1+59.gb37d8da-0osso1_amd64.deb
reprepro_5.1.1+2.1+59.gb37d8da-0osso1.debian.tar.xz
reprepro_5.1.1+2.1+59.gb37d8da-0osso1.dsc
reprepro_5.1.1+2.1+59.gb37d8da.orig.tar.gz
2017-09-11 - linux / process uptime / exact
How to get (semi)exact uptime values for processes?
If you look at the ps faux listing, you'll see a bunch of values:
walter   27311  0.8  1.8 5904852 621728 ?  SLl  sep06  61:05  \_ /usr/lib/chromium-browser/...
walter   27314  0.0  0.2  815508  80852 ?  S    sep06   0:00  |   \_ /usr/lib/chromium-brow...
walter   27316  0.0  0.0  815508  14132 ?  S    sep06   0:01  |   |   \_ /usr/lib/chromium-...
The second value (27311) is the PID, the tenth (61:05) is how much CPU time has been spent, and the ninth (sep06) is when the process started.
You can shorten the listing:
$ ps -p 27311 -o pid,start_time,bsdtime
  PID START   TIME
27311 sep06  61:05
And you can also get more granularity (and readability) out of the times:
$ ps -p 27311 -o pid,lstart,cputime
  PID                  STARTED     TIME
27311 Wed Sep  6 14:48:35 2017 01:01:05
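If all you're after is the elapsed time in seconds, newer procps versions of ps also know the etimes output column (check the output of ps L to see whether yours has it):

$ ps -p 27311 -o pid,etimes,etime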
Internally, ps(1) fetches the data from /proc/[pid]/stat. Read about its format in man 5 proc.
For instance, to fetch the start time, the following shell snippet does the trick by fetching the 22nd value from /proc/[pid]/stat. The process start time is stored there in clock ticks since boot, hence the division by CLK_TCK:
$ get_proc_starttime() {
    PID=$1
    SYSUP=$(cut -d. -f1 /proc/uptime)
    PIDUP=$((SYSUP-$(awk 'BEGIN{CLK_TCK=100}{printf "%lu",$22/CLK_TCK}' /proc/$PID/stat)))
    date -R -d "-$PIDUP seconds"
  }
$ get_proc_starttime 27311
Wed, 06 Sep 2017 14:48:36 +0200
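The CLK_TCK=100 is hard-coded there. If you don't want to assume 100 Hz, you could feed the real tick rate from getconf(1) into awk, something like this variant:

$ get_proc_starttime() {
    PID=$1
    SYSUP=$(cut -d. -f1 /proc/uptime)
    # use the system clock tick rate instead of assuming 100
    PIDUP=$((SYSUP-$(awk -v CLK_TCK=$(getconf CLK_TCK) \
        '{printf "%lu",$22/CLK_TCK}' /proc/$PID/stat)))
    date -R -d "-$PIDUP seconds"
  }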
Apropos seconds-since-boot: that's also the format of the times you see in the default dmesg(1) output:
$ dmesg | head -n1
[    0.000000] Initializing cgroup subsys cpuset
If you want to make sense of the times (and don't have a nice /var/log/kern.log to look at), you could translate the time-value like this:
$ SYSUP=$(cut -d. -f1 /proc/uptime); dmesg |
    sed -e 's/^\[\([^.]*\)\.[^]]*\]/\1/' |
    while read x; do
        echo "[$(date -R -d "-$((SYSUP-${x%% *})) seconds")] ${x#* }"
    done | head -n1
[Tue, 25 Jul 2017 16:04:59 +0200] Initializing cgroup subsys cpuset
But you won't need that; dmesg itself has an appropriate flag: dmesg -T
$ dmesg -T | head -n1
[Tue Jul 25 16:04:58 2017] Initializing cgroup subsys cpuset
2017-08-30 - sudo / cron / silence logging / authlog
Do you use sudo for automated tasks? For instance to let the Zabbix agent access privileged information? Then your auth.log may look a bit flooded, like this:
Aug 30 10:51:44 sudo: zabbix : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/iptables -S INPUT
Aug 30 10:51:44 sudo: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 30 10:51:44 sudo: pam_unix(sudo:session): session closed for user root
Or, if you run periodic jobs by root from cron, you get this:
Aug 30 11:52:01 CRON[28260]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 30 11:52:02 CRON[28260]: pam_unix(cron:session): session closed for user root
These messages obscure other relevant messages, so we want them gone.
A possible fix goes like this. Create a quietsudo system group and add the users for which we don't want logging.
# addgroup --system quietsudo
# usermod -aG quietsudo planb
# usermod -aG quietsudo zabbix
Next, drop the "zabbix" sudo line, by putting this in /etc/sudoers.d/quietsudo
:
# silence sudo messages in auth.log (everyone in the quietsudo group)
# > sudo: zabbix : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/iptables -S INPUT
Defaults:%quietsudo !syslog
Then, drop the "session opened" and "session closed" lines by making
these pam.d
changes. We add both "cron" and "sudo" to the
services we want to silence. For the latter one, we only silence the
sudo calls from the quietsudo users.
--- /etc/pam.d/common-session-noninteractive
+++ /etc/pam.d/common-session-noninteractive
@@ -25,6 +25,14 @@ session required pam_permit.so
 # umask settings with different shells, display managers, remote sessions etc.
 # See "man pam_umask".
 session optional pam_umask.so
+# silence CRON messages in auth.log
+# > CRON[12345]: pam_unix(cron:session): session opened for user root by (uid=0)
+# > CRON[12345]: pam_unix(cron:session): session closed for user root
+session [success=2 default=ignore] pam_succeed_if.so service in cron quiet use_uid
+# silence sudo messages in auth.log
+# > sudo: pam_unix(sudo:session): session opened for user root by (uid=0)
+# > sudo: pam_unix(sudo:session): session closed for user root
+session [success=1 default=ignore] pam_succeed_if.so service in sudo quiet uid = 0 ruser ingroup quietsudo
 # and here are more per-package modules (the "Additional" block)
 session required pam_unix.so
 # end of pam-auth-update config
My pam.d FU is quite lacking, so I cannot tell you exactly why it has to be in this order. But like this it works as intended.
You may need to restart the zabbix-agent (and planb-queue) to make the new groups take effect.
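On a systemd machine with the stock service names, that restart and a quick membership check could look like this:

# systemctl restart zabbix-agent
# id zabbix     # confirm the quietsudo group shows up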
2017-08-23 - powerdns / pdnsutil / remove-record
The PowerDNS nameserver pdnsutil utility has an add-record, but no remove-record. How can we remove records programmatically for many domains at once?
Step one: make sure we can list all domains. For our PowerDNS 4 setup, we could do the following:
$ list_all() {
    ( for type in master native; do pdnsutil list-all-zones $type; done ) |
        grep -vE '^.$|:' | sort -V
  }
$ list_all
domain1.tld
domain2.tld
...
Step two: filter the domains where we want to remove anything. In this case, a stale MX record we want removed.
$ list_relevant() {
    list_all | while read zone; do
        pdnsutil list-zone $zone | grep -q IN.*MX.*oldmx.example.com && echo $zone
    done
  }
$ list_relevant
domain2.tld
...
Step three: remove the record. Here we'll resort to a bit of magic using the EDITOR environment variable and sed(1).
$ EDITOR=cat pdnsutil edit-zone domain2.tld
; Warning - every name in this file is ABSOLUTE!
$ORIGIN .
domain2.tld 86400 IN SOA ns1.example.com info.example.com 2010090200 14400 3600 604800 3600
...
domain2.tld 86400 IN MX 20 oldmx.example.com
...
We can replace that phony cat "editor" with a sed command instead:
$ update_record() {
    EDITOR="/bin/sed -i -e '/IN.*MX.*oldmx.example.com/d'" pdnsutil edit-zone $1
    pdnsutil increase-serial $1
    pdns_control notify $1
  }
$ yes a | update_record domain2.tld
Checked 8 records of 'domain2.tld', 0 errors, 0 warnings.
Detected the following changes:
-domain2.tld 86400 IN MX 20 oldmx.example.com
(a)pply these changes, (e)dit again, (r)etry with original zone, (q)uit:
Adding empty non-terminals for non-DNSSEC zone
SOA serial for zone domain2.tld set to 2010090202
Added to queue
Wrapping it all up:
$ list_relevant | while read zone; do yes a | update_record $zone; done
Note that I ran into bug 4185 concerning edit-zone complaining about TXT records without quotes. I could edit those two records by hand. Fixing all of that is for another day.
2017-06-23 - gdb / debugging asterisk / ao2_containers
One of our Asterisk telephony machines appeared to "leak" queue member agents. That is, refuse to ring them because they were supposedly busy.
While trying to find the cause, I found that the CLI had no data dumping functions for the container I wanted to inspect: in this case pending_members, which is of type struct ao2_container.
So, we had to resort to using gdb to inspect the data.
The struct ao2_container data type itself looks like this:
(gdb) ptype struct ao2_container
type = struct ao2_container {
    ao2_hash_fn *hash_fn;
    ao2_callback_fn *cmp_fn;
    int n_buckets;
    int elements;
    int version;
    struct bucket buckets[];
}
Those buckets in turn contain linked lists with struct astobj2s, which hold the user_data; in this case of type struct member:
(gdb) ptype struct member
type = struct member {
    char interface[80];
    ...
    char state_interface[80];
    ...
    int status;
    ...
    struct call_queue *lastqueue;
    ...
}
First thing I did was get a core dump of the running daemon. The machine had been taken out of the pool, so the 2 second delay when doing a dump was not a problem:
# gdb -p $(pidof asterisk) -batch -ex generate-core-file
Saved corefile core.17658
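The rest of the session runs against that core file; opening it would be along these lines (binary path assumed here, point it at wherever your asterisk binary lives):

# gdb /usr/sbin/asterisk core.17658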
Next, examine the pending_members:
(gdb) print pending_members
$1 = (struct ao2_container *) 0x17837c8
(gdb) print *pending_members
$2 = {hash_fn = 0x7f294a63e4f0 <pending_members_hash>,
  cmp_fn = 0x7f294a638160 <pending_members_cmp>, n_buckets = 353,
  elements = 479, version = 51362, buckets = 0x17837e8}
We can check the individual elements:
(gdb) print pending_members->buckets[0]
$3 = {first = 0x7f285df57270, last = 0x7f285df57270}
(gdb) print *pending_members->buckets[0]->first
$4 = {entry = {next = 0x0}, version = 6626, astobj = 0x7f285ef154e8}
(gdb) print *pending_members->buckets[0]->first.astobj
$5 = {priv_data = {ref_counter = 2, destructor_fn = 0x0, data_size = 544,
    options = 0, magic = 2775626019}, user_data = 0x7f285ef15508}
And we can get to the underlying data, because we know what type it's supposed to have:
(gdb) print *pending_members->buckets[0]->first.astobj.user_data
$6 = (void *) 0x44492f6c61636f4c
(gdb) print *(struct member*)pending_members->buckets[0]->first.astobj.user_data
$7 = {interface = "Local/xxx@xxx", '\000' <repeats 40 times>,
  state_exten = '\000' <repeats 79 times>,
  state_context = '\000' <repeats 79 times>,
  state_interface = "SIP/xxx", '\000' <repeats 66 times>,
  membername = "Local/xxx@xxx", '\000' <repeats 40 times>,
  penalty = 0, calls = 0, dynamic = 0, realtime = 1, status = 0, paused = 0,
  queuepos = 1, lastcall = 0, in_call = 0, lastqueue = 0x0, dead = 0,
  delme = 0, rt_uniqueid = "242301", '\000' <repeats 73 times>, ringinuse = 0}
However, looping over a hash table of close to 500 elements by hand to inspect their contents is not feasible.
Enter python integration in gdb.
As documented here and here, you can call python scripts from gdb, which in turn can inspect the gdb data for you.
For instance:
(gdb) python print 'abc ' * 5
abc abc abc abc abc
(gdb) python
>print gdb.parse_and_eval(
>    '((struct member*)pending_members->buckets[0]'
>    '->first.astobj.user_data)->state_interface').string()
>^D
SIP/xxx
Good. Access from python. To import python code from a file, use: source my_file.py.
To find the members I was interested in, I hacked up the following little python script in three parts.
First, a few helpers:
from __future__ import print_function  # py2 and py3 compatibility


class simple_struct(object):
    def __init__(self, nameaddr):
        self._nameaddr = nameaddr
        self._cache = {}

    def __getattr__(self, name):
        if name not in self._cache:
            self._cache[name] = gdb.parse_and_eval(
                self._nameaddr + '->' + name)
        return self._cache[name]

    def __str__(self):
        return self._nameaddr


class simple_type(object):
    def __init__(self, datatype_name='void*'):
        self._datatype_name = datatype_name

    def __call__(self, nameaddr):
        return simple_struct(
            '((' + self._datatype_name + ')' + str(nameaddr) + ')')
Then, a reusable class to handle the ao2_container semantics:
ast_bucket_entry = simple_type('struct bucket_entry*')


class ast_ao2_container(object):
    def __init__(self, nameaddr):
        self._nameaddr = '((struct ao2_container*)' + nameaddr + ')'
        self.n_buckets = int(self.get(self.var('->n_buckets')))
        self.elements = int(self.get(self.var('->elements')))

    def get(self, name):
        return gdb.parse_and_eval(name)

    def var(self, tail='', head=''):
        return head + self._nameaddr + tail

    def var_bucket(self, idx, tail='', head=''):
        return self.var(head + '->buckets[' + str(idx) + ']' + tail)

    def foreach(self, func):
        found = 0
        for idx in range(0, self.n_buckets):
            first = self.get(self.var_bucket(idx, '->first'))
            if not first:
                continue
            found += self.foreach_in_bucket(func, idx, first)
        if found != self.elements:
            raise ValueError('found {} elements, expected {}'.format(
                found, self.elements))

    def foreach_in_bucket(self, func, idx, nextaddr):
        pos = 0
        while True:
            bucket = ast_bucket_entry(nextaddr)
            userdata = str(bucket.__getattr__('astobj.user_data'))
            func(userdata, idx, pos)
            pos += 1
            nextaddr = bucket.__getattr__('entry.next')
            if not nextaddr:
                break
        return pos
Lastly, my search/print of the member structs I was interested in:
app_queue__member = simple_type('struct member*')
app_queue__call_queue = simple_type('struct call_queue*')


def my_print_bucket_member(bucket, pos, member):
    print(bucket, pos, member)
    print(' state_interface =', member.state_interface.string())
    print(' interface =', member.interface.string())
    print(' queuepos =', int(member.queuepos))
    if member.lastqueue:
        lastqueue = app_queue__call_queue(member.lastqueue)
        print(' lastqueue =', lastqueue.name)
    print()


def my_find_all(nameaddr, bucket, pos):
    member = app_queue__member(nameaddr)
    my_print_bucket_member(bucket, pos, member)


def my_find_state_interface(nameaddr, bucket, pos):
    member = app_queue__member(nameaddr)
    state_interface = member.state_interface.string()
    if state_interface.startswith('SIP/some-account'):
        my_print_bucket_member(bucket, pos, member)


pending_members = ast_ao2_container('pending_members')
#pending_members.foreach(my_find_all)
pending_members.foreach(my_find_state_interface)
I was trying to find all members with state_interface starting with SIP/some-account. And as I suspected, they turned out to exist in the container: at bucket 80 as second element, and at bucket 225 as third element.
(gdb) source my_pending_members.py
80 1 ((struct member*)0x7f28c76adc88)
 state_interface = SIP/some-account-3
 interface = Local/xxx-3@context
 queuepos = 0
 lastqueue = 0x7f28c7820e32 "some-queue"

225 2 ((struct member*)0x7f28c6ce76f8)
 state_interface = SIP/some-account-6
 interface = Local/IDxxx-6@context
 queuepos = 2
 lastqueue = 0x7f28c7820e32 "some-queue"
Looking for those by hand would've been hopelessly tedious.
Now, continuing the investigation from gdb is easy.
The second element of bucket 80 is indeed member 0x7f28c76adc88:
(gdb) print (struct member*)pending_members->buckets[80]\
    ->first.entry.next.astobj.user_data
$8 = (struct member *) 0x7f28c76adc88
(gdb) print *(struct member *) 0x7f28c76adc88
$9 = {interface = "Local/xxx-3@context", '\000' <repeats 40 times>,
  state_exten = '\000' <repeats 79 times>,
  state_context = '\000' <repeats 79 times>,
  state_interface = "SIP/some-account-3", '\000' <repeats 66 times>,
  membername = "Local/xxx-3@context", '\000' <repeats 40 times>,
  penalty = 0, calls = 2, dynamic = 0, realtime = 1, status = 0, paused = 0,
  queuepos = 0, lastcall = 1498069849, in_call = 0, lastqueue = 0x7f28c68f7678,
  dead = 0, delme = 0, rt_uniqueid = "48441", '\000' <repeats 74 times>,
  ringinuse = 0}
Nice going gdb! I think I can get used to this.
2017-06-03 - letsencrypt / expiry mails / unsubscribe
Today I got one of these Letsencrypt Expiry mails again. It looks like this:
Your certificate (or certificates) for the names listed below will expire in
19 days (on 21 Jun 17 19:38 +0000). Please make sure to renew your
certificate before then, or visitors to your website will encounter errors.

[domain here]
...

If you want to stop receiving all email from this address, click [link here]
(Warning: this is a one-click action that cannot be undone)
I don't need this particular domain anymore. I understand that the unsubscribe is a one-click action. But does it unsubscribe from this domain only or from all my domains?
According to @pfg at the letsencrypt forum, it will unsubscribe you from all mailings for all domains tied to this e-mail address.
Okay, not clicking it then.
2017-05-30 - puppet / pip_version / facter
Every once in a while I have to deal with machines provisioned by puppet.
I can't seem to get used to the fact that --test not only tests, but actually applies. It does display what it does through its output, which is nice. To test without applying, you need the --noop flag.
But, today I wanted to bring up the quick fix to this old warning/error:
Error: Facter: error while resolving custom fact "pip_version": undefined method `[]' for nil:NilClass
The cause of the issue is an old version of pip(1) which has no --version parameter. Here's a quick fix you can place in /usr/local/bin/pip:
#!/bin/sh
# Wrapper that provides --version to older pip versions.
# Used by the Puppet pip_version.rb module which expects --version.
if test "$*" = "--version"; then
    ver=$(/usr/bin/pip --version 2>/dev/null)
    if test $? != 0; then
        ver=$(dpkg -l python-pip | awk '/^ii/{print $3}')
        echo "pip $ver from dpkg"
    else
        echo "$ver"
    fi
else
    exec /usr/bin/pip "$@"
fi
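Make the wrapper executable and give it a spin; this assumes the fact resolves pip through a PATH where /usr/local/bin comes before /usr/bin:

$ sudo chmod 755 /usr/local/bin/pip
$ pip --version     # should now answer, even with the old pip underneath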
2017-05-18 - ubuntu zesty / apt / dns timeout / srv records
Ever since I updated from Ubuntu/Yakkety to Zesty, my apt-get(1) would sit and wait a while before doing actual work:
$ sudo apt-get update
0% [Working]
Madness. Let's see what it's doing...
$ sudo strace -f -s 512 apt-get update
...
[pid 5603] connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
...
[pid 5603] sendto(3, "\1\271\1\0\0\1\0\0\0\0\0\0\5_http\4_tcp\3ppa\tlaunchpad\3net\0\0!\0\1", 46, MSG_NOSIGNAL, NULL, 0) = 46
[pid 5603] poll([{fd=3, events=POLLIN}], 1, 5000 <unfinished ...>
...
[pid 5600] select(8, [5 6 7], [], NULL, {0, 500000}) = 0 (Timeout)
...
[pid 5600] select(8, [5 6 7], [], NULL, {0, 500000}) = 0 (Timeout)
...
That is, it does a UDP sendto(2) to 127.0.0.1:53 with data containing _http\4_tcp\3ppa\tlaunchpad\3net. It's a DNS lookup of course, for _http._tcp.ppa.launchpad.net, for which it waits 5000 ms before continuing.
That looks like SRV records. New in apt, apparently. And probably a first lookup before falling back to regular A record lookups.
However, it shouldn't be timing out if there is nothing. Who is not doing its job?
$ sudo netstat -tulpen | grep 127.0.0.1:53
tcp    0  0 127.0.0.1:53  0.0.0.0:*  LISTEN  0  23600  1347/dnsmasq
udp    0  0 127.0.0.1:53  0.0.0.0:*          0  23599  1347/dnsmasq
$ dpkg -l dnsmasq | grep ^ii
ii  dnsmasq  2.76-5  all  Small caching DNS proxy and DHCP/TFTP server
Is it dnsmasq or is the problem upstream?
$ time dig -t srv _http._tcp.google.com. @ns1.google.com. | grep status:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 32887

real    0m0.023s
user    0m0.008s
sys     0m0.000s

$ time dig -t srv _http._tcp.google.com. @127.0.0.1 | grep status:

real    0m15.011s
user    0m0.004s
sys     0m0.004s
Okay, dnsmasq is to blame.
Interestingly, dnsmasq does return quickly for existing or even non-existing but NOERROR-status records:
$ dig -t srv _http._tcp.microsoft.com. @127.0.0.1 | grep -E 'status:|^[^;].*SRV'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32215
$ dig -t srv _sip._udp.example-voip-provider.com @127.0.0.1 | grep -E 'status:|^[^;].*SRV'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27212
_sip._udp.example-voip-provider.com. 2212 IN SRV 60 0 5060 sip01.example-voip-provider.com
Workarounds?
Other than checking why dnsmasq misbehaves, we can quickly work around this by either adding the following, or removing dnsmasq altogether.
For the following workaround, you will need to keep this list updated. So if removing dnsmasq is feasible, you should consider doing that.
$ cat /etc/dnsmasq.d/srv-records-broken
srv-host=_http._tcp.ppa.launchpad.net,91.189.95.83,80
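After dropping that file in place, dnsmasq needs a restart to pick it up; then the lookup should return immediately (commands assume a systemd-managed dnsmasq):

$ sudo systemctl restart dnsmasq
$ time dig -t srv _http._tcp.ppa.launchpad.net @127.0.0.1 | grep status: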
2017-05-13 - squashing old git history
You may have an internal project that you wish to open source. When starting the project, you didn't take that into account, so it's likely to contain references to private data that you do not wish to share.
Step one would be to clean things up. That can be a slow process, and in the meantime the project keeps getting updates.
Now, at one point you're confident that at commit X1000, the project contains only non-private data. But since the project wasn't stale, you may be 200 commits ahead, at X1200.
Instead of creating a new repository starting at commit X1200, you can squash commits X1..X1000 and keep the history of commits X1000..X1200.
You could do this with git rebase -i --root master and squash/fixup all commits from X1..X1000. But that's a rather lengthy operation, squashing 1000 commits.
Instead, you can follow this recipe:
Check out a temp branch from commit X1000:
git checkout --orphan temp X1000
git add -A
git commit --date "$(date -Rd '2017-01-01')" \
    -m 'squash: Initial commit up to begin 2017.'
Branch 'temp' now contains exactly one commit.
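A quick way to eyeball that single commit before continuing (plain git, nothing project-specific):

git log -1 --format=fuller temp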
Check that the log message, the date and the author are fine. Then rebase the newest commits from 'master' onto this new initial commit:
git rebase --onto temp X1000 master
At this point, 'master' is updated. And we can push it over the original:
git branch -d temp
git push -f
And then, if you're like me, this is the moment you find out that there are still a few items that you didn't want in there.
Quickly fixup a few problems and squash them into the root/first commit:
# edit files, removing stuff we don't want
git commit . -m fixup
git rebase -i --root
# move the fixup commits to the top below the first "pick", and replace
# "pick" with "fixup"
git push -f
Now, there's a nice clean history.
Now pull the new data onto the other checkouts and do a git gc to remove all traces of the old history:
git pull --rebase
git reflog expire --expire=now --all  #--verbose
git gc --aggressive --prune=now
2017-01-26 - detect invisible selection / copy buffer / chrome
In Look before you paste from a website to terminal the author rightly warns us about carelessly pasting any input from a web page into the terminal.
This LookBeforePaste Chrome Extension is a quick attempt at trying to warn the user.
Example output when pressing CTRL-C on the malicious code:
Heuristics are defined as follows. They could certainly be improved, but it's a start.
function isSuspicious(node) {
    if (node.nodeType == node.ELEMENT_NODE) {
        var style = window.getComputedStyle(node);
        var checks = [
            ['color', style.color == style.backgroundColor],
            ['fontSize', parseInt(style.fontSize) <= 5],
            ['zIndex', parseInt(style.zIndex) < 0],
            ['userSelect', style.userSelect == 'none']
        ];
        for (var i in checks) {
            if (checks[i][1]) {
                console.log('Looks suspicious to me:')
                console.log(node)
                console.log(JSON.stringify(checks))
                return true;
            }
        }
    }
}
I couldn't be bothered uploading it to the Chrome Store. But if you want to try it, it's in the blob below:
$ tar cv lookbeforepaste/* | gzip -c | base64
lookbeforepaste/background.html
lookbeforepaste/background.js
lookbeforepaste/icon.png
lookbeforepaste/listen.js
lookbeforepaste/manifest.json
H4sIACf8iVgAA+0ZS2wkR9XehXV2kt0YkWgBIahtwU7P7qg9M56xV+NP7LUdMPHaaL0rQMZyarpr Znq3p6vVVbNjxzF44WIfUCLBikhcSA45wgEpCClClvABlKAEiQNBnKJIHLhgIaEcVoRXXd093T3j eA9mTaR52p3urvde1Xuv3nv1Xtmi9E6FVKlLHMw4Gapg/U7NpU3b0Oq8YfUdB+QARnI58cyPlmJP D/KFUl9+OFcsDJdGiqPFvlx+ZKQw3Idyx7L6EdBkHLsI9bWwxYl7ON1R+E8ojItdnkyN1wk2JlMI jTPdNR2OmKtPKBFnuM2UyfEhiQTyIUk/XqHGhvgMnnK2k1aqBw8N1uHxf5sd0xpHxH9+JBfE/2ix WBoR8V8YHe3F/6OAoSF0s24yBP8sig1iIGojx2rWTBsJy3AtBSR3zUYZMcIRZxNXEWtNFAHJ4BeG sFlOpe6CDW3cIGgCpRMulR5LpfS6SxtEI+uc2Myktkbt64QxXCMaNowFE8hs4qrVpq1zQKsNVsvC erZBXPm8QZhDbUYyaBOSFEJmFQkijdwlNkcTsKpOnY1CGl26hMQ4IxbxpgoYBAghHZdUzXUQ06Nq MsfUTdpk6BmUXp5bQN+avrFYRmlUlp9Lz8HLWGwCDkoE7MEicQq9jl0GJIJSs4hd4/U4QYu6RkjA HMvk6tB32ZWhTAe1UDMyDZpEhdJIVCUBvkRytmaFcVfNZYFwOIOuoLSmaRENtsI3f0tsys2qqWOh BtN0l2BO1Nj0YluzsZFNvuEQMEwFM1NPx3GIm9wCpOob+oqvLUgi37LwcsW3kRjU6+lMYoqGdI1y 5x55qsLKyUVNndq3XAsw4k1z7Fp6K04BThx6FxgQYYu4XE3XqZnOjEXMkpG22kLEYqSbm1mUkUI6 ugfdLWkR7KrCdsGM3m/Ul9XNLcCJ/ycY/8n8H9jvONfw8n+xeGj9VyzlRf4vFIdLeSj9IP+XCqVe /n8ksPvNxa+dS31BuOC5+a/P3oDnoPj/2Bn4/fK/PsLw+Of87PTN9b/qxt75J+795Ys7b736YBFN nt3/+5+eLrx2cH/7lQs7L/UfbKL/nH26NlCZRm+cev+dqRef/Fn/P7avDu7+sn9/9FWriX577yu/ /tLOD18+1/feixd21k4ffAe9ceb9j6YePHn5nU/d2nxq50f3vv3Uzv1pDc/+Cg8Mbl77avPdN1uf vv2Hu+jf278Z/Gn/h33Tpyv3Piye/+PrU2++8nL/z/d2f4f3lPrzP7b2LqHvLe5eUD6Hzu6qp9Xz ++tnZk9dG0DNx89+frDv1va57Q9G391/vDD14LGd7/cdPHfqzM7p/MDgW/0Hn/3g7T+f35+bevvZ ve21qb8N7N9H5/vee2nhM0/8ZOr3wkDzc4uzv7j2/A9Obov+p5CMf8s7i4+v9hNwRPyXhvO5sP4b LeZE/Bd78f9oIFL/udiG4k8nCLwBwYnnbiBHVGgPVQAGhyuy4YhepAZRbfjJpNrlmvjW9LppGQLN /JImEx6jLuFNF/jjZCu5VagfBb4FYwSpFz0CscqyWbFMuyZKPm/MwS6c0ovesmHxAl9QGyXwY3Gh IpN1FydC4AsTYJuWBUd4VH2gX6DUUW2owex8FlF4UngKxEwlC1U2dRbMhskD2/h6heNocgLlovVF IOdMBSbNZGJ1jS8Hd5vElyzkyIl6xc53pa9iqG4SDB5HZPtgqTgvVAbctJsklbQRNItQ69nECCqd qEGgSJ41XVknP9syVPgGY8SME/OSDrH9RWgOjQNt3P4wmSbEwqbNpNRR8eM7Y7LlsJTs7pzi5yaU tp4EYmBuYe763OLNtcWl2blkI8H4hiVcq2XaBm1pNcJnaMNpcmIsC4xcIdkYEP2OKPxXYnZdgd7F om46K+fUvC8hg/xsd+QzArGaTTBXQf9l8wUC/ODhjMzbXJWcASaDxidQqYPxhXnYt/VONjkOTCjX wdNkxF322p5Q3PaQVyDb1Cbp1ZBttW0CkVZUYQcTQXMpjZHsZcReSMyKubqSX03ifT9kFBa2aE1N Q6yBSSNNAqfQP5TTmY9ls2NJohvFN5aXFjVopSDkzeqGL1OmkycWgFFEu6XYakcFZNIlUB1aVeS1 FLK5RThoeYT0iRtYre3B1J4BTjV0XM8NiRVzwuWgJ1Uz7SwnwsRk4D4WdhgxkrHlxxQIJzIXopD6 Pf9nWriMlxvENNjW69RtJ1GBpHHkUrUKJ0Ubbed9dJXqTZZgjeGSnJFtnYiYuJ0qJ1B+ZPhqsa3A jNgnKXy5M6dEnAlol8V9hSAVyZnXxVESWbBOXEA4Lq3girXxTMgYkymeU3Lx5jGewS8+TCJMurt3 4dBw4p4FAyJR58ekZe3cmNyeQwghZ0ozUyCkHYRb3TU79ByLqpxF+UiK8zwtMgfYPDwZkorFYjj0 WZmJYAnWJJqWzsTVOcQboiEWuMEcdq0NhCvU5UIs2N4ItxY6xsX2YJeGvn1bJYLUv69SN73ALSNF 3gQoW53RFJflOlSYyAAVa5h3DXAowLyai8hBZFDC7DQP+LFl0VbXSwakAy4SpP5FUNdkoHG67GWz IC98vJKhTqG24npNEfnKn7LsLZeN2LUced+Shy+If4PUREvhypmQ5d/1eadBOxEyce1oQA5owIeW Ct+wYcwJfHhF6N3zwdkjc2FWVjIne4XyiYZk/9fAtlkljEMHSO1jWuOo+5/RUvj3v0KxkBf3/8OF Yq//exQgIl0JNn0NTn6RC5QyKmRFAlPEBSZ8KaLMQtJPkOcoiqgMFThmvb8ISh7F6yTDjIJYnbYY FDfRpIUME0PShyTF69BsQGKzTKdCsWt4/YWXCgjW64ibcDpjmSJwUMEjh7ggQ4MYcv22vEpOy8ux dnqFYZnIFNHHCqJE6lUAu+Ux+WuvSXUY0K4EvLfllxLejCjIL4sVSL9rVRdMJEjEieSPNzCHgtHn q3PulIeGLg9dhvTpfTH/E4kyeUvOplRcMBZx16SubdHhAMVNi6+Ji1mhQnBB25YdbNKAI1OcCGJF yRY7KATtKuTjk3a2HvSgBz3oQQ968H8B/wU1tGE0ACgAAA==
Untar with: base64 -d | gunzip -c | tar xv
Update 2017-01-27

A possible improvement to isSuspicious could be:
function isSuspicious(node) {
    if (node.nodeType == node.ELEMENT_NODE) {
        var style = window.getComputedStyle(node);
        var autoOrZero = function(val) {
            return val == 'auto' || parseInt(val) == 0;
        };
        var checks = [
            ['color', style.color == style.backgroundColor, style.color],
            ['fontSize', parseInt(style.fontSize) <= 5, style.fontSize],
            ['zIndex', !autoOrZero(style.zIndex), style.zIndex],
            ['left', !autoOrZero(style.left), style.left],
            ['right', !autoOrZero(style.right), style.right],
            ['top', !autoOrZero(style.top), style.top],
            ['bottom', !autoOrZero(style.bottom), style.bottom],
            ['userSelect', style.userSelect == 'none', style.userSelect]
        ];
        var matches = 0;
        for (var i in checks) {
            if (checks[i][1]) {
                matches += 1;
            }
        }
        if (matches >= 2) {
            console.log('Looks suspicious to me:')
            console.log(node)
            console.log(JSON.stringify(checks))
            return true;
        }
    }
}