Earlier today I needed to debug an incompatibility between an updated version of Apache and a customer's HTTPS API client application. Unfortunately, even with the server's private key, Wireshark was unable to decrypt the packet stream, giving the message: "ssl_decrypt_pre_master_secret session uses DH (17) key exchange, which is impossible to decrypt". (Diffie-Hellman key exchange provides forward secrecy: the session keys are negotiated on the wire and never encrypted under the server's RSA key, so the private key alone is not enough to recover them.)
There are a number of HTTPS debugging proxies (notably Fiddler) for this type of work, but because I was trying to debug what looked to be a protocol violation, I wanted something that would preserve the HTTP stream byte for byte. The solution was stunnel to proxy the connection and tcpdump on the loopback interface. Essentially this creates an intentional man-in-the-middle. As with all SSL, it's important that the outgoing domain name (the "connect = www.mydomain.com:443" line) match the subject name of the certificate at the destination. Additionally, newer Apache versions verify that the HTTP Host header matches the SNI provided by the SSL connection and will return a 400 on mismatch.
The stunnel4 configuration is below:
setuid = stunnel4
setgid = stunnel4
pid = /var/run/stunnel4/stunnel4.pid
debug = 7
output = /var/log/stunnel4/stunnel.log
; Certificate presented to the connecting client
cert = /etc/stunnel/www.mydomain.pem
options = SINGLE_ECDH_USE
options = SINGLE_DH_USE

; stunnel4 needs a [section] per service; the names are arbitrary labels.
[cleartext-out]
; Accept cleartext on port 10443 and relay it to www.mydomain.com:443
accept = 127.0.0.1:10443
client = yes
connect = www.mydomain.com:443
; Verify the remote server's certificate against the system CA store
verify = 2
CApath = /etc/ssl/certs

[ssl-in]
; Accept SSL stream from client on 443, and relay it to the above cleartext socket @ 127.0.0.1:10443
accept = 443
connect = 127.0.0.1:10443
; TIMEOUTclose is only necessary for older Microsoft SSL - read up on it in man stunnel4
TIMEOUTclose = 0
tcpdump -i lo -s0 -w ~/www.mydomain.com.10443.cleartext.pcap tcp port 10443
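To steer the client through the proxy without touching its configuration (so its Host header and SNI still match the certificate), one option is a hosts-file override on the client machine. This is a sketch, not from the original post; 192.0.2.10 is a placeholder for the proxy host's address:

```
# /etc/hosts on a separate client machine.
# 192.0.2.10 is a placeholder for the machine running stunnel.
# Don't add this on the proxy host itself, or stunnel's outgoing
# connection to www.mydomain.com would loop back to its own listener.
192.0.2.10    www.mydomain.com
```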
Kyoto Tycoon and PHP's cURL implementation seem to disagree regarding HTTP/1.1's chunked encoding. All of the POST'ed records over 1024 bytes in length would stall for just over 1 second. The cause: for bodies over 1024 bytes, libcurl sends an "Expect: 100-continue" header and waits up to a second for the server's interim response before transmitting the body; Kyoto Tycoon never answers it, so every large POST eats the full timeout. The ultimate fix (discussed in the comments section of the curl_setopt PHP manual) was to disable the 'Expect' header.
$request = base64_encode("DB")."\t".base64_encode("myHashTable.kch")."\n";
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: text/tab-separated-values; colenc=B',
    'Expect:'  // an empty value suppresses curl's automatic "Expect: 100-continue"
));
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $request);
if (curl_exec($ch) === false) {
    trigger_error('CURL ERROR: '.curl_error($ch));
}
trigger_error('CURL INFO: '.print_r(curl_getinfo($ch), true));
As explained in my last post, I'm moving from a FuseCompress 2.0 backend to the more stable 0.9.x branch. I'm using the following code to move the repository incrementally (and in a recoverable way), since the disk is too small to hold both copies at once. Because the normal tools (mv/cp/rsync) work depth-first, and daily.0 is nearly identical to daily.1 (and daily.2, etc.), only a small amount of space is freed after moving daily.0 and the disk eventually fills.
As with everything posted here, use this at your own risk and ALWAYS backup!
# Create the directories ($source and $dest are placeholders - adjust to your paths)
source=/path/to/old/repository
dest=/path/to/new/repository
cd "$source"
find . -type d -print0 | while read -r -d '' file; do
    if test ! -d "$dest/$file"; then
        mkdir -vp "$dest/$file"
        touch --reference="$file" "$dest/$file"
        chown --reference="$file" "$dest/$file"
    fi
done

# Move the files in, link any duplicates, and remove the original.
for ssnap in $(ls "$source"); do
    cd "$source/$ssnap"
    find . ! -type d -print0 | while read -r -d '' file; do
        if test ! -e "$dest/$ssnap/$file" -o -h "$dest/$ssnap/$file"; then
            cp -adnvp "$file" "$dest/$ssnap/$file"
        else
            echo "$dest/$ssnap/$file exists."
        fi
        for dsnap in $(ls "$source"); do
            if test "$ssnap" = "$dsnap"; then continue; fi
            echo "Checking $dsnap - ../$dsnap/$file"
            if test "$file" -ef "../$dsnap/$file"; then
                ln -v "$dest/$ssnap/$file" "$dest/$dsnap/$file"
                rm -vf "../$dsnap/$file"
            fi
        done
        rm -vf "$file"
    done
done
Updated 2012-01-03: Filenames/directory names with newline characters caused mischief. Fixed by null-separating. Also included symlinks, char devices, and other special filetypes in the transfer.
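The update above mentions switching to null separation. As a minimal sketch of why (not from the original post, and assuming bash for the process substitution): a filename containing a newline survives a `find -print0 | read -d ''` loop intact, where line-oriented iteration would split it in two.

```shell
# Null-separated iteration copes with filenames containing newlines,
# which a plain `for f in $(ls ...)` or `while read f` loop would mangle.
tmp=$(mktemp -d)
touch "$tmp/plain" "$tmp/with
newline"
count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
done < <(find "$tmp" -type f -print0)   # process substitution keeps $count in this shell
echo "$count files"
rm -r "$tmp"
```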
I use fusecompress as a backing file system for rsnapshot. Unfortunately I've been having crashes with the 2.0 C++ fork, so I've tried converting back to the 0.9.x C implementation. That's been a tricky process because my disks are only just a bit too small for two concurrent copies of the rsnapshot tree. I've been using rsync to duplicate the tree, but because of the disk filling it has failed a few times. rsnapshot uses hard links to deduplicate files between snapshots (daily.0, daily.1, etc.). Since rsync had failed a few times, I wasn't sure that it would maintain hard links consistently between the yet-to-be-synced portion and the previously synced portion after being restarted. This script compares the hard-link structure between the two disk stores and generates a script to remedy any inconsistencies. That said, rsync seems to behave properly, because the script found no inconsistencies.
new=/home/backup.new.fusecompress.backing/snapshot
# Run from within the original store's daily.1; nothing is modified -
# remediation commands are written to /root/linked.daily.1 for review.
find . -type f | while IFS= read -r file
do  if test "$file" -ef "../daily.2/$file"; then
        if ! test "$new/daily.1/$file" -ef "$new/daily.2/$file"; then
            echo "rm -f \"$new/daily.2/$file\"" >> /root/linked.daily.1
            echo "ln \"$new/daily.1/$file\" \"$new/daily.2/$file\"" >> /root/linked.daily.1
        fi
    fi
done
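Both scripts hinge on test's -ef operator, which is true only when two paths refer to the same inode (i.e. they are hard links to one another) - exactly how rsnapshot's deduplicated files can be detected. A self-contained demonstration, using throwaway files rather than anything from the snapshot trees:

```shell
# -ef: true when both paths name the same inode on the same device.
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"          # hard link: same inode as a
cp "$tmp/a" "$tmp/c"          # independent copy: different inode
test "$tmp/a" -ef "$tmp/b" && linked=yes || linked=no
test "$tmp/a" -ef "$tmp/c" && copy_linked=yes || copy_linked=no
echo "b hard-linked to a: $linked; c hard-linked to a: $copy_linked"
rm -r "$tmp"
```

This prints "b hard-linked to a: yes; c hard-linked to a: no".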