12/21/2014

Benefits of a dedicated server

Wondering why you should get a dedicated server instead of shared hosting?
Whether it's a seedbox, web hosting or something else, the same benefits apply to dedicated servers, if you can afford that little bit of extra. So let's take a look at them, shall we? :)

Performance, Power

You will gain stable, high performance and processing power. Shared services usually run on much stronger servers, but they also carry a much higher markup and, in the case of web hosting, a huge number of users per machine.

You will have the CPU, RAM and disks all to yourself. In some cases you can even choose to emphasize CPU, RAM or disks depending on your particular needs.

Flexibility

On a dedicated server you can run whatever software you please and are capable of installing and configuring yourself, even obscure little pieces of software.
You can even share your dedicated server with others to split the costs, if you have the skills to set up a shared environment yourself.

Security

No other users sharing the disks, and only the software you choose runs on the machine. This usually translates into fewer potential attack vectors. You do need to secure the server yourself, but with the right tools, regular updates etc., a dedicated server is potentially much more secure.

Ping and SSH Port

Oh, one more thing: changing the SSH port and disabling ping is not security. It's a nuisance and an annoyance to the support team; it will slow down your support requests, create additional work for them, and provides virtually no additional security whatsoever. Any even semi-capable attacker knows how to scan for live hosts without ping, and will find the SSH port trivially with barely any slowdown. Plus, if you run a webserver, everyone will know the node is up anyway - so what's the point?
Don't do it. Breaking standards and annoying your provider is not worth the ~5 seconds of slowdown a potential attacker faces finding these out in any case.
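To illustrate how trivial finding a moved SSH port is, here is a minimal TCP connect-scanner sketch in Python (the host and port list are placeholders; real attackers use dedicated tools like nmap, which are far faster than this):

```python
import socket

def find_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Sweeping the whole 1-65535 range like this takes minutes at most,
# so moving sshd off port 22 buys essentially no time at all.
```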

10/27/2014

Choosing a seedbox provider

Choosing a seedbox provider can be hard! So many choices, so many offers, so many caveats!

Do I want unmetered bandwidth or traffic limits? Do I want 10Gbps down speed, or is 100Mbps sufficient? What is Transmission? What is rTorrent? What's Deluge?

It's then best to go with someone like Pulsed Media, with their 14-day money-back guarantee and emphasis on ease of use, making it risk free. They even accept Bitcoin as a payment method :)

But if you are looking for 10Gig speeds, then someone like feralhosting is more suited to your needs.

10Gbps, 1Gbps or 100Mbps?
If you are looking just for a downbox - downloading stuff via a 3rd-party server quickly and then FTPing that data home - then 100Mbps can be sufficient: a 9GB file still downloads in under 15 minutes, and your home connection is likely much slower than that. If you do have a fast download speed at home, then you need 1Gbps - at least down, if not up as well.

If you are seeding or publishing your own content, you need more bandwidth available to you; 1Gbps starts to become a necessity for certain users. But 100Mbps is still a fair bit of bandwidth if you are looking to publish something on a public tracker - that's still around 700GB a day in practice!
If you are competing with others, then you definitely need as much bandwidth as possible. 10Gbps, autodl-irssi etc. become necessities of life very quickly.
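To sanity-check those numbers, here is the back-of-the-envelope arithmetic (link speeds in bits per second, file sizes in bytes; the 65% utilization figure is an assumption to show how the ~700GB/day figure falls out of a 100Mbps line):

```python
def transfer_time_minutes(size_gb, link_mbps):
    """Time to move size_gb gigabytes over a link_mbps link at full rate."""
    size_bits = size_gb * 1e9 * 8           # GB -> bits
    return size_bits / (link_mbps * 1e6) / 60

def daily_volume_gb(link_mbps, utilization=1.0):
    """Data moved per day at the given average link utilization."""
    return link_mbps * 1e6 / 8 * 86400 * utilization / 1e9

print(transfer_time_minutes(9, 100))   # 9 GB over 100 Mbps: 12 minutes
print(daily_volume_gb(100))            # 100 Mbps flat out: 1080 GB/day
print(daily_volume_gb(100, 0.65))      # at ~65% utilization: ~700 GB/day
```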


9/29/2014

Peering vs Routing - what's this confusion about?

There's a bit of confusion and misleading going around regarding network peering and routing, so let's clear it up in layman's terms.

Some people in positions of authority are giving misleading information to get through to the average seedbox user. These people define peering as if it were routing - well, guess what? It's NOT! And now we get people asking "how's your peering?" when they actually mean how their data will be routed. Peering is an agreement between 2 parties to connect to each other and exchange traffic, usually in an approximately 1:1 ratio.

Let's look at the Wikipedia definitions!
Peering synopsis as follows:
In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the users of each network. The pure definition of peering is settlement-free, "bill-and-keep," or "sender keeps all," meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers.

Routing synopsis:

Routing is the process of selecting best paths in a network. In the past, the term routing was also used to mean forwarding network traffic among networks. However this latter function is much better described as simply forwarding. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology.

When people give advice and discuss this, they get the two confused - even the self-proclaimed professionals and authority figures, who might even be in the hosting or networking business!

So in short:
Peering is an agreement between 2 networks to send traffic directly to each other without compensation, thus avoiding transit costs, lowering the latency between the two networks and hence increasing network quality. The two networks need to meet physically in the same location.

Routing is how packets move through the internet: which networks a packet passes through to reach the end recipient, finding the shortest and fastest path between the two endpoints.
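To make the routing half concrete: path selection can be modeled as a shortest-path search over a graph of networks, with edge weights standing in for latency or link cost. A toy sketch follows - the network names and costs are invented for illustration, and real BGP path selection uses policies on top of plain metrics:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: cost}}; returns (cost, path)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical networks: a direct peering link (AS1-AS3) beats two transit hops.
nets = {
    "AS1": {"AS2": 5, "AS3": 3},
    "AS2": {"AS4": 5},
    "AS3": {"AS4": 4},
}
print(shortest_path(nets, "AS1", "AS4"))  # (7, ['AS1', 'AS3', 'AS4'])
```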

9/10/2014

Storage software, hardware - the whole industry: It's bullshit

The storage industry updates the old idiom "lies, damned lies and statistics" to something new: lies, damned lies and the storage industry!

Tbh, I would trust a sleazy sub-$1000 used car dealer more than the storage industry, after dabbling in the big-storage field for my work. Just some pointers here:

ZFS:
Intermittent hardware failure is almost guaranteed to nuke your data; for one thing, ZFS never drops into read-only mode.
Performance sucks; it's the worst design performance-wise - UNLESS you need single-user sequential access. In RAIDZ a single read will usually activate *all* drives in the vdev - seriously, ALL drives? It ruins the IOPS completely. Your other choice is mirroring, which still removes half of your storage and half of your performance.
Don't believe the lies: ZFS is not the silver bullet that solves it all. It's pure lies when they say ZFS outperforms everything and anything - it doesn't, because it lacks the IOPS capability.
For sequential, one-user-at-a-time access it does have huge performance though, so if that's your usage and you don't mind risking your data - go with it!
ZFS does have good SSD caching though; it's extremely effective. The problem is that it takes weeks to warm up to any sensible degree. Weeks upon weeks. Yes, it really takes that long to warm. Once it is warm, though, it does serve the most common data immensely fast.
So ZFS has its PROs too! :) But for a multi-user setup? Only if you can handle the loss of IOPS and the data risk.
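The IOPS complaint above can be put into rough numbers. A common rule of thumb - a simplification assuming random small reads and identical drives, not a benchmark - is that a RAIDZ vdev delivers about the random IOPS of a single drive, while a pool of mirrors delivers roughly one drive's IOPS per vdev:

```python
def raidz_read_iops(drives_in_vdev, drive_iops=150):
    """Rule of thumb: one RAIDZ vdev ~ one drive's worth of random IOPS,
    no matter how many drives are in it."""
    return drive_iops

def mirror_pool_read_iops(total_drives, drive_iops=150):
    """Pool of 2-way mirrors: each mirror vdev contributes ~1 drive of IOPS
    (reads can be served from either side, so this is conservative)."""
    return (total_drives // 2) * drive_iops

# 12 drives at ~150 IOPS each (a typical 7200rpm disk):
print(raidz_read_iops(12))        # 150  -> one wide RAIDZ vdev
print(mirror_pool_read_iops(12))  # 900  -> six mirror vdevs
```

So with the same 12 drives, the mirror layout wins random IOPS by roughly 6x, at the cost of half the capacity - which is exactly the trade-off described above.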

Older Adaptec Cards, couple gens old:
It almost seems like the card has a timer: the more time has passed since release, the more the card "sleeps". Reviews at launch showed brilliant performance, yet with drives many years more modern, the same cards today don't reach even half of the performance from those release-time reviews. The card was SATA II capable.

SATA II Promise cards:
A drive fails? Sorry, no go: your system won't even boot, so you cannot diagnose the issue. Please try to guess which drive failed.
Further, in JBOD mode you cannot even read a drive's serial number by any normal method.

RocketRAID, LSI MegaRaid + Newer AMD FX CPU / Chipset
RocketRAIDs lose connection to drives every now and then for no apparent reason; not sure if this happens on Intel motherboards as well.
LSI MegaRAID won't let the system boot, with udevd modprobe kill timeouts. Apparently this happens only on newer AMD FX platforms.

Legacy RAID:
Way, way too slow recheck and resync times, and it lacks flexibility for today's huge drives. Otherwise very stable though, with decent performance.

Btier:
Nukes data, corrupting the OS under very high load; I would assume a block move from one tier to another fails?
The same happens if you try to manually move a LOT of blocks from tier to tier.
Very nice idea though, it just needs some more work.

SSD Caching:
All the usual methods - Flashcache, EnhanceIO etc. - are bad: you are lucky to get two magnetic drives' worth of performance from your expensive SSD cache. It's better to just add more magnetic drives to the array and use the SSDs completely separately.
They all assume you have infinite write speed, so eventually the SSD cache becomes an SSD bottleneck; the way to fix it is to secure-erase the SSD and then re-enable caching.

SSDs:
You still need to provision 15%+ of the space to sit empty so the firmware can handle wear leveling, TRIM etc. discreetly in the background. Firmwares differ hugely: one SSD might need 25%+ while another is fine with 5% for this purpose.
TRIM/discard in many SSD cache packages doesn't actually work.
SSDs also fail. They fail a lot. In fact, just don't use them, OK??
If you use them in any scenario other than a traditional disk/RAID role, they tend to fail super quickly.
As a single desktop drive, or in RAID0 on a server, or even RAID5, they work insanely well though. Haven't tested as a journal-only device.
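The over-provisioning math itself is simple. As a sketch, using the rough percentages mentioned above (which vary by firmware, so treat them as assumptions):

```python
def usable_capacity_gb(drive_gb, overprovision_pct):
    """Capacity left for data after reserving spare area for the firmware
    to use for wear leveling and garbage collection."""
    return drive_gb * (1 - overprovision_pct / 100)

# A hypothetical 480 GB SSD at the three over-provisioning levels above:
for pct in (5, 15, 25):
    print(pct, usable_capacity_gb(480, pct))
# 5 -> 456.0, 15 -> 408.0, 25 -> 360.0 GB usable
```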

8/20/2014

Blazing fast speeds on a shared seedbox server?

YES, that is possible! And not only on SSD servers, but on more traditional servers too.
You need something like Pulsed Media's Super100 seedbox, with few users per disk and traffic limits ensuring you get a fair piece of the pie that is bandwidth.

You will then be able to use a much larger share of the pie at the time you need it, rather than being limited by other resource-hungry users on the server - and thus get blazing fast seedbox speeds.

Look at the overall offer: if it has traffic limits, you can potentially get much higher burst performance than otherwise expected.
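The logic behind traffic limits enabling bursts can be sketched numerically. Assume a shared 1Gbps link and a hypothetical 4TB/month cap per user (both numbers invented for illustration): each user's *average* draw is far below the link rate, which leaves headroom for any one user to burst when they actually need it.

```python
def average_rate_mbps(monthly_cap_gb, days=30):
    """Average link rate a user consumes if they spread their traffic cap
    evenly over the whole month."""
    seconds = days * 86400
    return monthly_cap_gb * 1e9 * 8 / seconds / 1e6

# 20 users, each with a hypothetical 4 TB/month cap, on a 1000 Mbps link:
per_user = average_rate_mbps(4000)
print(round(per_user, 1))        # ~12.3 Mbps average per user
print(round(20 * per_user, 1))   # ~246.9 Mbps average total load
# -> most of the 1000 Mbps link is idle on average, so a capped user
#    can still burst to hundreds of Mbps at the moment they need it.
```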

8/05/2014

Datacenter wiring: network patch panels, switches, wiring mess


The music is just as terrifying as the cabling!
Now, try to work on that.

Datacenter cabling really should look something more like this, or this:

Untangling the hot mess that is in the first vid is going to take some major work!
Here is an example with just a few racks involved, but a whole ton of switch ports connected to them:
You can read more about this at: http://linkstate.wordpress.com/category/the-big-weekend/