12/21/2014

Benefits of a dedicated server

Wondering why you should get a dedicated server instead of shared hosting?
Whether it's a seedbox, web hosting or something else, the same benefits apply to dedicated servers, if you can afford that little bit extra. So let's take a look at them, shall we? :)

Performance, Power

You will gain stable, high performance and processing power. Shared services usually have a much higher markup and run on much stronger servers, but also host a huge number of users, especially in the case of web hosting.

You will have the CPU, RAM and disks just for you. In some cases you can even choose to emphasize CPU, RAM or disks depending on your particular needs.

Flexibility

On a dedicated server you can run whatever software you please and are capable of installing and configuring yourself, even obscure little pieces of software.
You can even share your dedicated server, if you have the skills to set up a shared environment, to split the costs of such a server.

Security

No other users share the disks, and only the software you choose runs on the machine. This usually translates into fewer potential attack vectors. You do need to secure the server yourself, but a dedicated server is potentially much more secure if you put the right tools in place, keep it updated and so on.

Ping and SSH Port

Oh, one more thing: changing the SSH port and disabling ping is not security. It's a nuisance and an annoyance to the support team, will slow down your support requests, creates additional work for them, and provides barely any additional security whatsoever. Any even semi-capable attacker knows how to scan for live hosts without ping and can find the SSH port trivially, with barely any slowdown. Plus, if you run a webserver, everyone will know the node is up anyway - so what's the point?
Don't do it. Breaking standards and annoying your provider is not worth the five seconds of slowdown it costs a potential attacker to find these out in any case.
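To see just how little a non-standard port buys you, here's a minimal sketch in Python (host and port range are placeholders, not anyone's real server) of how trivially an SSH daemon gives itself away by its protocol banner, ping or no ping:

```python
import socket

def find_ssh(host, ports=range(1, 65536), timeout=0.5):
    """Connect to each TCP port and collect those answering with an SSH banner.

    SSH servers announce themselves with a line starting "SSH-" to any
    client that connects, so hiding the port number hides nothing.
    """
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                banner = s.recv(64)
                if banner.startswith(b"SSH-"):
                    found.append((port, banner.strip().decode(errors="replace")))
        except OSError:
            continue  # closed, filtered, or timed out - move on
    return found
```

A tool like nmap does the same job in one line and faster; the point is simply that SSH announces itself to anyone who connects, standard port or not.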

10/27/2014

Choosing a seedbox provider

Choosing a seedbox provider can be hard! So many choices, so many offers, so many caveats!

Do I want unmetered or traffic limits? Do I want 10Gbps down speed, or is 100Mbps sufficient? What is Transmission? What is rTorrent? What's Deluge?

It's then best to go with someone like Pulsed Media, with their 14-day money-back guarantee and emphasis on ease of use, making it risk free. They even accept Bitcoin as a payment method :)

But if you are looking for 10Gig speeds, then someone like Feralhosting is more suited to your needs.

10Gbps, 1Gbps or 100Mbps?
If you are looking just for a downbox - downloading via a third-party server quickly and then FTPing that data home - then 100Mbps can be sufficient: a 9GB file still downloads in under 15 minutes, and your home connection is likely much slower than that. If you have a fast download speed at home, then you need 1Gbps - at least down, if not up as well.

If you are seeding or publishing your own content, you need more bandwidth available to you. 1Gbps starts to become a necessity for certain users, but 100Mbps is still a fair bit of bandwidth if you are looking to publish something on a public tracker - that's still around 700GB a day at realistic utilization!
If you are competing with others, then you definitely need as much bandwidth as possible. 10Gbps, autodl-irssi etc. become a necessity of life very quickly.
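The back-of-the-envelope numbers above are easy to sanity-check yourself. A quick sketch (pure unit arithmetic, no provider specifics):

```python
def transfer_time_s(size_gb, link_mbps):
    """Seconds to move size_gb gigabytes over a link_mbps link at full rate."""
    return size_gb * 8 * 1000 / link_mbps

def daily_volume_tb(link_mbps, utilization=1.0):
    """Terabytes moved per day at the given average link utilization."""
    return link_mbps * 1e6 / 8 * 86400 * utilization / 1e12

print(transfer_time_s(9, 100) / 60)   # 9GB over 100Mbps: 12.0 minutes
print(daily_volume_tb(100))           # 100Mbps flat out: ~1.08 TB/day
print(daily_volume_tb(100, 0.65))     # at ~65% utilization: ~0.7 TB/day
```

So the "700GB a day" figure corresponds to keeping a 100Mbps line roughly two-thirds busy around the clock.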


9/29/2014

Peering vs Routing - what's this confusion about?

There's a bit of confusion and misleading information going around regarding network peering and routing, in layman's terms.

Some people in positions of authority are giving misleading information to get through to the average seedbox user. These people define peering as if it were routing - well, guess what? It's NOT! And now we get people asking "how's your peering?" when they actually mean how their data will be routed. Peering is an agreement between two parties to connect to each other and exchange traffic, usually at approximately a 1:1 ratio.

Let's look at the Wikipedia definitions!
Peering synopsis as follows:
In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the users of each network. The pure definition of peering is settlement-free, "bill-and-keep," or "sender keeps all," meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers.

Routing synopsis:

Routing is the process of selecting best paths in a network. In the past, the term routing was also used to mean forwarding network traffic among networks. However this latter function is much better described as simply forwarding. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology.

When people give advice and discuss this, they get the two confused - even the self-proclaimed professionals and authority figures who might actually be in the hosting or networking business!

So in short:
Peering is an agreement between two networks to send traffic directly to each other without compensation, thus avoiding transit costs, lowering latency between the two networks and hence increasing network quality. The two networks need to physically meet at the same geographic location.

Routing is how packets move through the internet: through which networks a packet travels to reach the end recipient, finding the shortest and fastest path between the two endpoints.

9/10/2014

Storage software, hardware - the whole industry: It's bullshit

The storage industry updates the idiom "lies, damned lies and statistics" to something new: lies, damned lies and the storage industry!

Tbh, after dabbling in the big-storage field for my work, I would trust a sleazy sub-$1000 used-car dealer more than the storage industry. Just some pointers here:

ZFS:
Intermittent hardware failure is almost guaranteed to nuke your data; for one thing, ZFS never drops to read-only mode.
Performance sucks; it's the worst design in terms of performance - UNLESS you need single-user sequential access. In RAIDZ a single read will usually touch *all* drives in the vdev. Seriously, ALL drives? It ruins the IOPS completely. Your other choice is mirroring, which still removes half of your storage and half of your performance.
Don't believe the lies: ZFS is not a silver bullet that solves everything. The claims that ZFS outperforms anything and everything are pure lies - it doesn't, because it lacks the IOPS capability.
For sequential, one-user-at-a-time access it does have huge performance though, so if that's your usage and you don't mind risking your data - go with it!
ZFS does have good SSD caching though; it's extremely effective. The problem is that it takes weeks to warm up to any sensible degree. Weeks upon weeks. Yes, it really takes that long to warm up. Once it is warm, though, it serves the most common data immensely fast.
So ZFS has its PROs too! :) But for a multi-user setup? Only if you can handle the loss of IOPS and the data risk.
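A rough sketch of why RAIDZ random-read IOPS collapse compared to striped mirrors (the 150 IOPS per-drive figure is my assumption for a typical 7200rpm disk, not a benchmark):

```python
def pool_random_read_iops(drives, layout, per_drive_iops=150):
    """Very rough random-read IOPS estimate for a pool of `drives` disks.

    per_drive_iops ~150 is a ballpark for a 7200rpm drive (an assumption).
    """
    if layout == "raidz":
        # A single RAIDZ vdev behaves like ONE drive for random reads,
        # since each read typically touches every drive in the vdev.
        return per_drive_iops
    if layout == "mirrors":
        # Striped mirror pairs: every drive can serve reads independently,
        # at the cost of half the raw capacity.
        return drives * per_drive_iops
    raise ValueError(f"unknown layout: {layout}")

print(pool_random_read_iops(8, "raidz"))    # one 8-drive RAIDZ vdev: ~150
print(pool_random_read_iops(8, "mirrors"))  # 4x mirror pairs: ~1200
```

Eight drives, an order of magnitude apart - that's the IOPS trade-off in a nutshell.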

Older Adaptec Cards, couple gens old:
It almost seems like the card has a timer: the more time passes since release, the more the card "sleeps". Original reviews gave the cards brilliant marks and performance; today, with drives many years more modern, the cards didn't even reach half the performance of the release-time reviews. The card was SATA II capable.

SATA II Promise cards:
A drive fails? Sorry, no go: your system won't boot and you cannot diagnose the issue. Please try to guess which drive failed.
Further, in JBOD mode you cannot even get the serial number of a drive by any normal method.

RocketRAID, LSI MegaRaid + Newer AMD FX CPU / Chipset
RocketRAIDs lose their connection to drives every now and then for no reason; not sure if this happens on Intel motherboards as well.
LSI MegaRAID won't let the system boot, hitting udevd modprobe kill timeouts. Apparently this happens only on newer AMD FX platforms.

Legacy RAID:
Way, way too slow recheck and resync times, and it lacks flexibility for today's huge drives. Otherwise very stable though, with decent performance.

Btier:
Nukes data, corrupting the OS under very high load; I would assume a block move from one tier to another fails?
The same happens if you try to manually move A LOT of blocks from tier to tier.
Very nice idea though, it just needs some more work.

SSD Caching:
All the usual methods - Flashcache, EnhanceIO etc. - suck; you are lucky to get two magnetic drives' worth of performance out of your expensive SSD cache. It's better to just add more magnetic drives to the array and use the SSDs completely separately.
They all assume you have infinite write speed, so eventually the SSD cache turns into an SSD bottleneck; the way to solve that is to secure-erase the SSD and then re-enable caching.

SSDs:
You still need to leave 15%+ of the space empty for the firmware to handle wear leveling, TRIM etc. discreetly in the background. Firmwares differ hugely: one SSD might need 25%+ while another is fine with 5% for this purpose.
TRIM/discard in many SSD cache packages doesn't actually work.
They also fail. They fail a lot. In fact, just don't use them, OK??
If you plan to use them in any scenario other than a traditional disk / RAID setup, they tend to fail super quickly.
As a single desktop drive, or perhaps RAID0 on a server or even RAID5, they work insanely well though. Haven't tested one as a journal-only device though.
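The over-provisioning rule of thumb above is simple arithmetic; a small sketch (the 15% default follows the rule of thumb above, and the 480GB drive is just an example):

```python
def overprovision(capacity_gb, reserve_fraction=0.15):
    """Split an SSD into a usable area and an untouched reserve.

    Leaving reserve_fraction of the drive unpartitioned gives the
    controller spare area for wear leveling and garbage collection;
    some firmwares want as much as 25%, others manage with 5%.
    """
    reserve = capacity_gb * reserve_fraction
    return capacity_gb - reserve, reserve

usable, spare = overprovision(480)
print(usable, spare)  # 408.0 72.0 - partition only the first 408GB
```

In practice this just means partitioning less than the full drive and never touching the rest.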

8/20/2014

Blazing fast speeds on a shared seedbox server?

YES, that is possible! And not only with SSD servers, but also with more traditional ones.
You need something like Pulsed Media's Super100 seedbox, with few users per disk and traffic limits ensuring you get a fair piece of the pie that is bandwidth.

You will then be able to use a much larger share of the pie at the time you need it, rather than potentially being limited by resource-hungry other users on the server, and thus get blazing fast seedbox speeds.

Look at the overall offer: if it has traffic limits, you can potentially get much higher burst performance than otherwise expected.

8/05/2014

Datacenter wiring: network patch panels, switches, wiring mess


The music is just as terrifying as the cabling!
Now, try to work on that.

Datacenter cabling really should look something more like this:
or this:

Untangling the hot mess in the first vid is going to take some major work!
Here is an example with just a few racks involved, but a whole ton of switch ports connected to them:
You can read more about this at: http://linkstate.wordpress.com/category/the-big-weekend/

7/14/2014

Minimizing last gen Dell server power consumption

I got my hands on a used Dell cloud chassis with 4 nodes, each with 2x quad-core Intel Xeon CPUs and 24GB of RAM, for a total of 96GB.

Initially, on first boot, it drew 1050-1200W from the wall. That's insane!
The fans were screaming like a hurricane.

After a little bit of fiddling, slotting the nodes out and back in again, it finally stabilized at 690W under 100% CPU load. That is still way too much.

No power-saving setting in the BIOS seemed to help at all. Linux speedstepping was a bit erratic as well.

I took out the 2nd CPU from each node (not needed for my use) and removed half of the RAM, which was dedicated to the 2nd CPU. Finally: down to 320W under load, measured from the wall!

Now that is sweet: 320W for four Xeon servers with 12GB of ECC RAM each :)
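Worth noting what that 370W reduction means over a year; a quick sketch (the 0.15€/kWh electricity price is my assumption for the example, your rate will differ):

```python
def annual_kwh(watts):
    """kWh consumed per year by a constant load of `watts` watts."""
    return watts * 24 * 365 / 1000

saved_w = 690 - 320                  # wall draw before vs after the CPU/RAM pull
print(annual_kwh(saved_w))           # ~3241 kWh/year saved
print(annual_kwh(saved_w) * 0.15)    # at an assumed 0.15€/kWh: ~486€/year
```

Pulling the unused CPUs and RAM pays for a chassis like this surprisingly quickly.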

7/10/2014

making network cables

Got some gear in and needed custom-length network cables. Since I hadn't made any cables in over a decade, I had to get a new reel, plugs, pliers etc.

On to making cables. I recalled it being super simple, but somehow, after hours, I had 0 working cables - WTF?! Always some issue.

I recall it being so easy that I made my first cables without even checking how to make them, and on the first attempt for a long run. Weird - when did it become this hard?

For a while I made do with way-too-long cables, but then I decided I'd better learn to make good ones, checked YouTube and stumbled upon this funny Tek Syndicate tutorial video:
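For reference, the wire order that kept tripping me up is easy to write down. A little sketch of the standard T568B pin layout, plus the pair swap that turns one end into T568A for a crossover cable:

```python
# T568B pin order (the common straight-through wiring standard),
# pin 1 at the top with the clip facing away from you.
T568B = [
    "white/orange", "orange",
    "white/green",  "blue",
    "white/blue",   "green",
    "white/brown",  "brown",
]

def to_t568a(pinout):
    """Swap pins 1<->3 and 2<->6 to get the T568A order on one end,
    which is how a crossover cable is wired."""
    out = list(pinout)
    out[0], out[2] = out[2], out[0]
    out[1], out[5] = out[5], out[1]
    return out

for pin, color in enumerate(T568B, start=1):
    print(pin, color)
```

Both ends T568B gives a straight-through cable; one end T568B and the other T568A gives a crossover.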



7/03/2014

Cogent, Verizon, Comcast. Get your act together - NOW!

Cogent, Verizon and Comcast - Get your damn act together now for the sake of users!

Cogent is a brilliant disruptive force in the networking industry - they aim to have the best pricing around, and aim constantly to make better offers for their customers, hence their network has grown really fast.

Verizon and Comcast are the big bad telcos in the States who want to control everything. All three have peering agreements with each other.

Thing is, Cogent has a lot of customers who use tons of bandwidth, as their per-Mbps rates are generally the lowest. In today's market as low as around 0,45€/Mbps (4500€ for a 10Gig link)!

Hence, Cogent is the choice for parties who move a lot of data, such as Netflix.

Verizon and Comcast are telcos, they have their network extending for the last mile and they too have vast backbone networks to connect all the metropolitan areas together, all the counties and states.

Cogent doesn't offer connections to the end user, they focus on datacenters, backbone networking, connecting different datacenters, networks and telcos together.

For more than a year Verizon and Comcast have been refusing to upgrade their peering with Cogent, because Cogent is such a disruptive force and moves so much traffic; Cogent gets the big-data customers that Verizon and Comcast are unable to reach.
Verizon and Comcast have the eyeballs, Cogent has the providers - though lately Cogent has been starting to get the eyeballs as well, as smaller telcos in other countries buy transit from them.

We cannot know precisely what goes on behind the curtains, but my bet is that Verizon and Comcast would like to charge Cogent for access to their eyeballs, and hence refuse to upgrade the peering.
At the grassroots level it has also been suspected that Verizon and Comcast throttle traffic from various Cogent sources, and some of it has been in the news: Comcast refusing to upgrade and fix congestion in order to force Netflix into an interconnection deal.

At the end of the day, it's the users who need traffic between Verizon and Cogent, or Comcast and Cogent, who suffer. My guess is that the traffic is mostly one-sided: most of it flows from Cogent's network to Verizon and Comcast, since that is the nature of the arrangement.
Cogent has a vastly "upstream"-biased network, while Comcast and Verizon, offering the last mile, have vastly "downstream"-biased networks - or so I would assume.
However, Cogent keeps scoring more and more local ISPs in countries outside the States - so we might see that change eventually.

6/26/2014

Why shared over dedi?

In many cases a shared seedbox is a much better choice than a dedi seedbox.

Why is a shared seedbox often the better choice?


  • Ease of use
  • Instant availability
  • Cost
  • Better burst capabilities
  • Sometimes you get practically a dedi for shared money

Easy to use

Most of the time a dedi doesn't come set up; it's just a plain install, and you need to know how to install everything yourself.
You could go with something like PMSS, but it still requires you to set it up and to know how to administer the server: updates, creating users etc.

A shared service has all of this covered already.

Instant availability

Shared and semi-dedicated seedboxes are usually instantly available: no waiting period, just log in and start using it. Even if a dedicated server is provisioned to you within seconds, as many providers do, you still need to install everything, and that could easily take an hour or two.

Cost

Obviously a shared service is going to be much cheaper almost all of the time. Certainly, there are ultra-cheap dedis on the market as well, but they often come with a setup fee or other hardships, and are often so weak as to be barely usable.

Better peak performance

Say you get a 10Gbps seedbox from someone like Feralhosting; you are going to see 5Gbps+ peaks, at least down. A dedicated server like that would cost you hundreds a month, but with someone like Feralhosting it's going to be tens of euros. Sure, they have a huge markup and a huge number of users per server to be able to offer the service at that price, but you also benefit from their optimizations.

There are also plans where you get ultra-stable performance despite being shared - for example a 1Gbps seedbox with a 100Mbps upstream cap and traffic limits - so you get a nice middle ground between dedicated and shared :)

Sometimes you get tons more than you bargained for

With shared services you have the potential to get tons more than you bargained for: sometimes the other users on the server barely use any resources whatsoever, or it's a mostly empty server waiting for new signups. Think about it - a dedi seedbox for shared-seedbox money! :O

6/21/2014

How copyright laws and lobbyists stifle innovation

TorrentFreak reported that Flixtor and Torrentlookup were shut down voluntarily by their developers due to pressure from the MPAA. They were looking down the barrel of a huge lawsuit and were not capable of defending themselves in court.

I haven't seen or used Flixtor, but from the screencap and description I gather it was quite a bit of innovation: a competing product to Netflix, but using torrents as the distribution mechanism, and apparently free for users.

Now this is the kind of innovation we need, but it is being stifled by the MPAA, RIAA and the likes. Just imagine: an easy-to-use app for the newest movies with a huge selection, powered by BitTorrent on the backend - costing the developers nothing to handle the distribution of data - and free for users.

Why couldn't they just work with them? Make some kind of deal to keep this innovation moving. The irony is that this is exactly the kind of innovation that is in the best interest of copyright holders. Really, the only question should be: how do we monetize a platform where our releases are freshly available, which has a huge market, zero distribution costs and worldwide access? Even if the copyright holders received just a few cents per viewing of a movie, they would stand to make a killing once something like this became popular enough.
Since distribution is powered by BitTorrent, it's accessible from China, India etc. at the same flat rate - and those are some huge markets.

6/17/2014

100Tb Servers No More!

In the past there used to be 1Gbps 100TB servers, meaning these dedicated servers had a traffic limit. Usually you could not even come close to that limit on a dedicated server.

Today's trend, however, is 1Gbps burstable with strict caps on total bandwidth of 150-300Mbps.

Traffic limits and traffic overages are a thing of the past, and the seedbox community thanks thee!

6/15/2014

Wordpress suing for wrongful, abusive DMCA complaint

WordPress has had it with fraudulent DMCA claims: they are suing Nick Steiner of Straight Pride UK over an abusive DMCA takedown notice.

Oliver Hotham wrote an article about the "Straight Pride UK" organization, which included a quote from Nick Steiner. Steiner disliked the article and sent a DMCA takedown notice to WordPress in order to censor it.

Automattic, the company that develops WordPress and runs the service, teamed up with journalism student Oliver Hotham and sued for $10,000 in damages and $14,520 in attorneys' fees.

Automattic's general counsel Paul Sieminski said:
“The system works so long as copyright owners use this power in good faith. But too often they don’t, and there should be clear legal consequences for those who choose to abuse the system.”
Which essentially means the system is broken from the get-go, especially considering that failure to act on DMCA notices removes the service provider from the Safe Harbor the DMCA provides, while the default penalty for DMCA abuse - "under penalty of perjury" - amounts to a slap on the wrist for abusers.

Hence the system is inherently and completely broken: service providers basically have no means to protect their users, and anyone claiming copyright - rightful copyright holder or not, and whether it is an actual copyright complaint or not - can take any content down. In theory, you could ask a company to take down its own logo and it would be forced to do so; only afterwards could it sue you for damages. In essence, the censorship and vandalism have already worked.

This doesn't stop with big organizations; any individual can claim copyright to any work and have it taken down. In fact, if you send a complaint to Blogger about this post, it will be taken down.

Source: TorrentFreak.

6/13/2014

New Kimsufi dedis

Been using one of the new Kimsufi 500G dedis. It's insanely affordable at 4,99€ a month + VAT. You can't really get the VAT removed in any case; they don't respond to those requests even if you are eligible for VAT removal. And support... you can forget getting even semi-decent support from them; it's basically just a forum these days.

Anyway, the server was set up fast, and the new management GUI is convenient and quick to use - very simple, but also limited and slightly buggy in its attempt to be so dynamic.

For basic servers it's a no-frills, no-chills service which offers quite a lot for the money - but knowing their history, if you use a lot of bandwidth they might demand more money from you at any moment.

For something basic like an IRC shell box and "file locker" it's very nice :) but for a bit more punch and stability I'd look elsewhere.

6/12/2014

Dedicated Seedbox vs Shared Seedbox

Dedicated or shared seedbox? That's quite a question!

It all depends on your budget and what you are after. If you want stable speeds and lots of resources, and have a bit of budget, then a dedicated seedbox is definitely your option!

If you want higher burst speeds and don't mind sharing the connection, or you are budget-limited, then a shared seedbox is your choice.

Technically they differ in that a dedicated seedbox has dedicated resources - dedicated hardware *just* for you - so for the money it may not offer as high peak speeds or performance, but you will not share with anyone else either.

A shared seedbox is a slot, or slice, of a server with multiple users. It does not mean you share an account with another user; it merely means there is a multitude of users on the same server, which may affect your speeds.

If you are worried that no bandwidth will be left for you - that you'll land on a server full of heavy users - then a good middle-ground choice is a seedbox with traffic limits. Traffic limits ensure that a single user cannot hog 100% of the server's resources, as they get throttled if they do too much traffic.
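As a rough illustration of the idea (the class and the 4TB quota are made up for the sketch, not any provider's actual scheme), per-user traffic accounting can be as simple as:

```python
class TrafficQuota:
    """Toy per-user traffic accounting: once a user's transferred bytes
    exceed the quota for the period, their speed gets limited."""

    def __init__(self, quota_tb):
        self.quota_bytes = quota_tb * 1e12
        self.used = 0.0

    def record(self, transferred_bytes):
        """Add a transfer to the running total for this period."""
        self.used += transferred_bytes

    def throttled(self):
        """True once the user is over quota and should be rate-limited."""
        return self.used > self.quota_bytes

user = TrafficQuota(quota_tb=4)
user.record(3.5e12)
print(user.throttled())  # False - still under the 4TB quota
user.record(1.0e12)
print(user.throttled())  # True - over quota, speed gets limited
```

Because no single user can burn the whole pipe all month, everyone on the box keeps a fair share of burst headroom.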