At my first job back in 1992 I had three things on my desk: a big phone, a 486SX PC running Windows 3.0 and a DEC VT320 terminal. Even back then those were pretty outdated, but we still used them for our helpdesk ticket system and our in-company email. (By the way, I recommend that everyone in tech start out as a helpdesker.) Five years later, I started a company with four others, and the first business we did was collect a bunch of VT420 terminals, which we then sold for ƒ 25,- apiece. I kept one for myself.
So the Digital VT100 terminal family holds a special place in the retro tech corner of my heart. Over the years, I tried to connect the terminal to my Mac using a USB-to-serial converter a few times, but never got anywhere. Today, I tried again, and finally got everything to work.
Read the article - posted 2020-01-25
A few days ago I ran into this blog post from 2012: Deprecate, Deprecate, Deprecate, which lists a bunch of IPv6 stuff that's been "deprecated" by the IETF. That means: we changed our minds about this protocol or feature, stop using it.
The list (the blog post obviously has more information):
- IPv4-compatible IPv6 Addresses. Status: Deprecated.
- Site-Local Addresses. Status: Deprecated.
- The 6bone. Status: Deprecated.
- ipv6.exe (Windows XP). Status: Deprecated.
- NAT-PT and NAPT-PT. Status: Deprecated.
- The Type 0 Routing Header. Status: Deprecated.
- Your valid yet older SLAAC IPv6 addresses. Status: Valid (but deprecated).
But wait, no IPv6/DNS-related deprecations?
Perhaps the most annoying one of those was the change from ip6.int to ip6.arpa. Originally, the idea was to have reverse mapping of IPv6 addresses under the ip6.int domain name. So for instance, the IPv6 address of this server is 2a01:7c8:aaaa:1fb::2. In the reverse DNS that would then become the following, with a PTR record pointing to the server's name:
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.f.1.0.a.a.a.a.8.c.7.0.1.0.a.2.ip6.int
Then, around 2003/2005 they decided to change this to:
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.f.1.0.a.a.a.a.8.c.7.0.1.0.a.2.ip6.arpa
Which of course led to years of inconsistent results as people made the change at different times. So annoying, especially because it's just a cosmetic change, and an invisible one at that!
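Incidentally, Python's ipaddress module can generate these names for you. A quick sketch, using the server address from above (the module only knows about ip6.arpa, so the old ip6.int form is just a string substitution here):
import ipaddress
# The server address from the example above.
addr = ipaddress.IPv6Address("2a01:7c8:aaaa:1fb::2")
# reverse_pointer expands the address to 32 nibbles, reverses them and
# appends the current ip6.arpa suffix.
print(addr.reverse_pointer)
# 2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.f.1.0.a.a.a.a.8.c.7.0.1.0.a.2.ip6.arpa
# The deprecated ip6.int form only differed in the suffix.
print(addr.reverse_pointer.replace("ip6.arpa", "ip6.int"))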
There were also some more substantial IPv6/DNS-related deprecations: A6 records and bitlabels. The idea behind those was that IPv6 should make it simple to renumber. For this purpose, a system was designed where a DNS record wouldn't hold the entire IPv6 address, but only part of it. So if you move a bunch of systems to a different subnet, you just change the record that holds the subnet bits, and all the other partial address records remain the same. Unfortunately, this proved too ambitious. Not only did we move to AAAA records, which are just like A records, only four times as big, but support for A6 records and bitlabels was swiftly removed from BIND. This actually caused me some trouble, as the zone file with my A6/bitlabel experiments in it suddenly wasn't recognized by other servers anymore.
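To get a feel for how that was supposed to work, here is a small Python sketch that assembles a complete address from a chain of A6-style records. The record names and contents are made up for illustration; only the general mechanism follows the old RFC 2874 idea of splitting an address over multiple records:
import ipaddress
# Hypothetical A6-style chain: each record holds (prefix length, address
# suffix, name of the record that supplies the remaining high-order bits).
records = {
    "server.example.com.":   (64, "::2",             "subnet.example.com."),
    "subnet.example.com.":   (48, "0:0:0:1fb::",     "provider.example.com."),
    "provider.example.com.": (0,  "2a01:7c8:aaaa::", None),
}
def resolve(name):
    # Each record supplies the low-order (128 - prefix_len) bits; the
    # remaining high-order bits come from the record it points to.
    prefix_len, suffix, parent = records[name]
    bits = int(ipaddress.IPv6Address(suffix))
    if prefix_len > 0:
        mask = (1 << (128 - prefix_len)) - 1
        bits = (int(resolve(parent)) & ~mask) | (bits & mask)
    return ipaddress.IPv6Address(bits)
print(resolve("server.example.com."))  # 2a01:7c8:aaaa:1fb::2
Renumbering would then only mean changing the provider.example.com. record; every address built on top of it follows automatically.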
Moral of the story: measure twice, cut once.
Permalink - posted 2020-01-13
In a paper for HotNets '19, seven researchers admit that "beating BGP is harder than we thought". (Discovered through Aaron '0x88cc' Glenn.) The researchers looked at the techniques big content delivery networks, including Google, Microsoft and Facebook, use to deliver content to users as quickly as possible. These range from DNS redirects to PoPs (points of presence) close to the user, to BGP anycast that routes requests to a nearby PoP, to keeping traffic within the CDN's own network for as long as possible ("late exit" or "cold potato" routing).
Turns out, all this extra effort only manages to beat BGP as deployed on the public internet a small fraction of the time. So it's probably not really worth the effort. Also interesting: when BGP is worse, that's usually consistent over relatively long timescales, and when things deteriorate over one path, they tend to also get worse over alternative paths.
That seems strange, as the authors observe that "BGP, the Internet’s inter-domain routing protocol, is oblivious to performance and performance changes." However, BGP isn't deployed in a vacuum. People either install capacity where BGP is going to use it, or they tune BGP parameters to use capacity that's installed.
So having automated traffic management doesn't seem to help much—or does it? Maybe all paths deteriorate together because the automated traffic management solutions do their job and distribute the traffic equally over the available paths, keeping performance the same.
However, there are some cases where one of the options (regular BGP, anycast, late exit, DNS redirect) performs a lot worse than the alternative(s). So it's probably more important to focus on avoiding really bad paths than on trying to pick really good ones.
Also interesting: apparently ISP resolvers rarely include the EDNS client subnet option, which DNS redirection really needs in order to work well. However, the Google and OpenDNS/Cisco public DNS services do seem to support it (but not CloudFlare), so if you're experiencing poor performance towards a CDN, you may want to try using the Google or OpenDNS DNS servers.
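If you want to experiment with this yourself, here's a rough sketch using the dnspython library: it attaches an EDNS client subnet option to a query and sends it to Google's public resolver. The query name and client prefix are just placeholders, and whether the answer actually changes with the prefix depends on the resolver and the CDN behind the name:
import dns.edns
import dns.message
import dns.query
# Pretend the query comes from a client in 192.0.2.0/24 (a documentation
# prefix); a CDN-hosted hostname would be a more interesting target than
# this placeholder name.
ecs = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query("www.example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=5)
print(response.answer)
Repeating the query with different prefixes shows whether the resolver and the CDN's authoritative servers actually act on the client subnet information.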
Download the paper here.
Permalink - posted 2019-12-30