9/29/2014

Peering vs Routing - what's this confusion about?

There's a bit of confusion and misinformation going around regarding network peering and routing, so let's sort it out in layman's terms.

Some people in positions of authority are giving misleading information to the average seedbox user. These people define peering as if it were routing - well, guess what? It's NOT! And now we get people asking "how's your peering?" when they actually mean how their data will be routed. Peering is an agreement between 2 parties to connect to each other and exchange traffic, usually at roughly a 1:1 ratio.

Let's look at the Wikipedia definitions!
The peering synopsis is as follows:
In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the users of each network. The pure definition of peering is settlement-free, "bill-and-keep," or "sender keeps all," meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers.

Routing synopsis:

Routing is the process of selecting best paths in a network. In the past, the term routing was also used to mean forwarding network traffic among networks. However this latter function is much better described as simply forwarding. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology.

When people give advice and discuss this, they get the two confused - even the self-proclaimed professionals and authority figures who might actually be in the hosting or networking business!

So in short:
Peering is an agreement between 2 networks to send traffic directly to each other without compensation, thus avoiding transit costs, lowering the latency between the two networks and hence increasing network quality. The two networks need to be physically present at the same geographic location.

Routing is how packets move through the internet: which networks a packet passes through to reach the end recipient, ideally following the shortest and fastest path between the two endpoints.
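The difference is easy to see in code. Here's a minimal sketch, with hypothetical network names and made-up latencies, of what routing actually does: pick the lowest-cost path through intermediate networks (Dijkstra-style), where a peering link is simply a cheap direct edge between two networks.

```python
import heapq

# Hypothetical inter-network latencies in milliseconds (made-up numbers).
# The "PeerNet" edges represent a direct peering link: a cheap hop.
links = {
    "YourISP":  {"TransitA": 10, "PeerNet": 2},
    "TransitA": {"YourISP": 10, "TransitB": 15, "DestNet": 30},
    "TransitB": {"TransitA": 15, "DestNet": 5},
    "PeerNet":  {"YourISP": 2, "DestNet": 4},
    "DestNet":  {},
}

def cheapest_path(start, goal):
    """Dijkstra: find the lowest-latency route between two networks."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in links[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return None

print(cheapest_path("YourISP", "DestNet"))
# The peered path wins: (6, ['YourISP', 'PeerNet', 'DestNet'])
```

So routing is the path-selection process; peering merely adds a short direct edge that routing can then prefer.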

9/10/2014

Storage software, hardware - the whole industry: It's bullshit

The storage industry changes the idiom "lies, damned lies, and statistics" to something new: lies, damned lies, and the storage industry!

To be honest, after dabbling into the big storage field for my work, I would trust a sleazy sub-$1000 used car dealer more than the storage industry. Just some pointers here:

ZFS:
Intermittent hardware failure is almost guaranteed to nuke your data; for one, ZFS never drops to read-only mode.
Performance sucks - it's the worst design in regards to performance, UNLESS you need single-user sequential access. In RAIDZ a single read will usually activate *all* drives in the vdev. Seriously, ALL drives? It ruins the IOPS completely. Your other choice is mirroring, which will still take half of your storage and half of your performance.
Don't believe the lies: ZFS is not the golden bullet that solves everything. It's pure fiction when they say that ZFS outperforms everything and anything - it doesn't, because it lacks the IOPS capability.
For sequential, one-user-at-a-time access it does have huge performance though, so if that's your usage and you don't mind risking your data - go with it!
ZFS does have good SSD caching though; it's extremely effective. The problem is that it takes weeks to warm up to any sensible capacity. Weeks upon weeks. Yes, it really takes that long to warm up. Once it is warm, though, it serves the most common data immensely fast.
So ZFS does have its PROs too! :) But for a multi-user setup? Only if you can handle the loss of IOPS and the data risk.
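The RAIDZ IOPS complaint above is easy to put into numbers. This is a rough rule-of-thumb sketch, not a benchmark: it assumes a single wide RAIDZ vdev delivers roughly one drive's worth of random-read IOPS (since every read touches all drives), while mirrors let every spindle serve reads independently. Both the per-drive IOPS figure and the model itself are simplifying assumptions.

```python
DRIVE_IOPS = 150  # assumed random-read IOPS of one 7200 rpm drive

def raidz_read_iops(drives, per_drive=DRIVE_IOPS):
    # One wide RAIDZ vdev: all drives seek together -> ~1 drive's IOPS.
    return per_drive

def mirror_read_iops(drives, per_drive=DRIVE_IOPS):
    # drives/2 mirror vdevs; reads can be balanced across both halves
    # of each mirror, so every spindle contributes.
    return drives * per_drive

print("12-drive RAIDZ :", raidz_read_iops(12), "IOPS")
print("12-drive mirror:", mirror_read_iops(12), "IOPS")
```

Under these assumptions a 12-drive mirrored pool serves roughly an order of magnitude more random reads than one 12-drive RAIDZ vdev - which is the whole multi-user argument in a nutshell.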

Older Adaptec Cards, couple gens old:
It almost seems like these cards have a timer: the more time has passed since release, the more the card "sleeps". Reviews at release time reported brilliant performance, yet with today's far more modern drives the cards don't even reach half of the performance measured in those original reviews. The card was SATA II capable.

SATA II Promise cards:
Drive fails? Sorry, no go: your system won't boot up and you cannot diagnose the issue. Please try to guess which drive failed.
Further, in JBOD mode you cannot even get the serial number of a drive by any normal method.

RocketRAID, LSI MegaRaid + Newer AMD FX CPU / Chipset
RocketRAIDs lose connection to drives every now and then for no reason; not sure if this happens on Intel motherboards as well.
LSI MegaRAID won't let the system boot, hanging on udevd modprobe kill timeouts. Apparently this happens only on newer AMD FX platforms.

Legacy RAID:
Way, way too slow recheck and resync times, and it lacks flexibility etc. for today's huge drives. Otherwise very stable though, with decent performance.
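The "too slow resync" point is back-of-the-envelope arithmetic: a full rebuild has to touch every sector, so time scales with capacity divided by sustained rebuild speed. The drive size and rebuild rate below are illustrative assumptions.

```python
def resync_hours(capacity_tb, rebuild_mb_s):
    """Hours to read/write every byte of a drive at a steady rate."""
    bytes_total = capacity_tb * 1e12          # TB -> bytes
    seconds = bytes_total / (rebuild_mb_s * 1e6)
    return seconds / 3600

# A 4 TB drive rebuilding at an (optimistic) steady 100 MB/s:
print(round(resync_hours(4, 100), 1), "hours")  # ~11.1 hours
```

In practice rebuilds share the spindles with live I/O, so real resync times are usually far worse than this idealized floor - hence the complaint.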

Btier:
Nukes data, corrupting the OS, if you have very high load - I would assume a block move from one tier to another fails?
The same happens if you try to manually move A LOT of blocks from tier to tier.
Very nice idea though; it just needs some more work.

SSD Caching:
All the usual methods - Flashcache, EnhanceIO etc. - suck: you are lucky to get two magnetic drives' worth of performance out of your expensive SSD cache. It's better to just add more magnetic drives to that array and use the SSDs completely separately.
They all assume you have infinite write speed, so eventually the SSD cache turns into an SSD bottleneck; the way to solve it is to secure erase the SSD and then re-enable caching.
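The bottleneck point can be shown with trivial throughput math. A sketch under assumed, illustrative numbers: if every write funnels through a write-back cache device, the array can never sustain more than the cache device's write speed, no matter how many spindles sit behind it.

```python
def array_write_mb_s(drives, per_drive_mb_s):
    # Raw aggregate write throughput of the spindles alone.
    return drives * per_drive_mb_s

def cached_write_mb_s(drives, per_drive_mb_s, ssd_mb_s):
    # Write-back cache in the data path: capped by the SSD.
    return min(array_write_mb_s(drives, per_drive_mb_s), ssd_mb_s)

print(array_write_mb_s(12, 120), "MB/s raw array")      # 1440
print(cached_write_mb_s(12, 120, 400), "MB/s cached")   # 400
```

With 12 drives at an assumed 120 MB/s each, a single 400 MB/s SSD in the write path throws away nearly three quarters of the array's sequential write capability - exactly the "SSD bottleneck" effect described above.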

SSDs:
You still need to provision 15%+ of the space to sit empty so the firmware can handle wear leveling, TRIM etc. discreetly in the background. Firmwares differ hugely: one SSD might need 25%+ while another is fine with 5% for this purpose.
TRIM/discard in many SSD cache packages doesn't actually work.
They also fail. They fail a lot. In fact, just don't use them, OK??
If you plan to use them in any scenario other than a traditional disk/RAID role, they tend to fail super quickly.
As a single desktop drive, or perhaps RAID0 on a server or even RAID5, they work insanely well though. Haven't tested one as a journal-only device though.
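The over-provisioning guideline above is simple arithmetic. A minimal sketch, treating the percentages as rough guesses that vary by firmware rather than as fixed facts:

```python
def usable_gb(ssd_gb, op_percent):
    """Capacity left over after reserving op_percent for the firmware."""
    return ssd_gb * (1 - op_percent / 100)

# How much a hypothetical 480 GB SSD yields at different reserves:
for op in (5, 15, 25):
    print(f"{op:2d}% OP -> {usable_gb(480, op):.0f} GB usable")
```

The spread is the point: depending on which firmware you drew, the same 480 GB drive might safely offer anywhere from about 360 GB to about 456 GB.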