This is another in a series of layoffs over the last year or two. They've tried to keep them relatively quiet.
They've reimagined the company as providing support on third-party platforms, namely AWS and Azure, instead of focusing on building, selling and supporting their own offerings. That model requires significantly fewer employees. That change, coupled with the buyout, means that there's no surprise in seeing layoffs at Rackspace, and there are likely more to come.
Private equity bought Rackspace out. Layoffs are part of that playbook. They basically pump cash in to grow sales while cutting costs, so they can chop the company up and sell it off for a multiple of what they paid for it.
They usually borrow money at really high leverage to buy the company, so they risk very little of their own capital. Then they cut anything and everything to minimize costs (and choke off the ability to compete in the future), then dump the carcass(es) on whatever suckers they can find, because they've made the balance sheet look better temporarily.
Banks line up to lend them money because there is a never-ending supply of suckers and these firms usually manage to find one before the inevitable crash and burn.
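To put rough numbers on the leverage playbook (all figures here are invented for illustration, not Rackspace's actual deal terms), a toy calculation:

    # Toy LBO arithmetic -- every figure is made up for illustration.
    purchase_price = 4.0                 # $B paid for the company
    equity = 0.5                         # $B of the PE firm's own capital
    debt = purchase_price - equity       # $B borrowed against the company itself

    sale_price = 5.0                     # $B received when the company is flipped
    equity_multiple = (sale_price - debt) / equity

    print(f"debt: ${debt:.1f}B, equity in: ${equity:.1f}B")
    print(f"equity multiple on exit: {equity_multiple:.1f}x")
    # A 25% rise in total value becomes a 3x return on the firm's own
    # capital -- and the debt sits on the company's books, not the PE firm's.

That asymmetry is why dressing up the balance sheet for a quick sale beats investing for the long term.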
Worked at a place that was bought out by private equity right at the apex of the tech bubble. As you can imagine, the value of what they'd paid for dropped like a rock right after the check was cashed. So, like working at a post-PE-buyout company, only amplified. They had little cash left to pump into sales, so all the effort went to cutting costs.
Yes, a firm called Apollo Global Management. I have to wonder what their end game is, though. Apollo chops up, repackages with other pieces of their portfolio, and then sells. I wonder where Rackspace fits into that.
So why aren't the sellers before PE and buyers after PE aware of the fact that no value is being added by PE in the interim, and hence sell cheaply and overpay?
Why is it so obvious to you, but not these other people? How does this charade perpetuate? Maybe there is value in there somewhere?
People selling to PE can usually see the writing on the wall, so they're eager to leave with whatever cash they can still get. That makes for low prices.
Buyers always think that they have some insight about the business that can materialise some yet-unrealised potential. You've probably done it yourself a few times, "if only Company X did this and that, instead of what they're doing now -- they'd make loads!". Everyone knows PE don't know how to run businesses beyond the basics, so buyers will always see untapped potential. That makes for high prices.
> They've reimagined the company as providing support on third-party platforms, namely AWS and Azure
To be fair, Rackspace's market cap was only about $4 billion at the time of acquisition. They don't really have the capital to get into a war with Amazon, Google, or Microsoft when data centers cost hundreds of millions to billions to build.
The economies of scale they're fighting against are gigantic, so it makes sense to go towards supporting the clouds of other companies rather than building out their own bare metal.
It also requires very different types of employees
The sales engineer / presales architect working to design a cross-cloud solution for a customer is a very very different kind of "smart" from the Linux geeks that keep your systems running once they're deployed
Neither is better than the other, mind you - they're just different
I think it is important to note that Rackspace got into AWS and Azure services because the dedicated physical servers business was a dead man walking. While Rackspace did introduce virtualization and other services running on their own hardware, they would never be able to touch Amazon's industrial-sized scale.
My last Rackspace experience was awful. We signed with them because they said they had the ability to mitigate DDoS attacks. A competitor of ours was trying to push us out of business with downtime.
We told them the strength of the DDoS attacks and they said they could mitigate them. We signed a contract, got hit with a DDoS, and were null-routed by them. I got to talk to one of their techs, and he said they couldn't actually mitigate DDoS attacks of that strength. He told us to use DDOS Arrest, which worked. So they lied about their ability to mitigate DDoS, and we were stuck with this expensive 2-year contract.
Then a month later our site went down for 48 hours. It turned out they had a dead router in their internal network, but they insisted nothing was wrong, and I had to debug the issue myself to get them to find it.
My last experience with Rackspace was I think emblematic of their new direction.
They got into doing AWS consulting and sold the startup I was working for an AWS setup that was about 100x too large and overengineered for their static site.
Prior, I had used them for a number of projects and always found them to be professional and helpful. I think the old Rackspace is gone.
Yeah, San Antonio is almost certainly a better description of the location. Rackspace's HQ is technically in the incorporated city of Windcrest, but unless you've lived in the San Antonio MSA, you've probably never heard of Windcrest. It's a two square-mile city of 5300 people entirely surrounded by San Antonio and its MSA population of 2.3M people. Windcrest consists of a single suburban residential neighborhood and a couple retail strip centers. Rackspace bought an empty shopping mall on the very edge of the city that had been vacant for about two years for its HQ. Given the retail decline in the area, it was, no doubt, a coup for Windcrest. I grew up just north of Windcrest and used to go to that mall all the time although it was on the decline even 25 years ago.
Being incredibly pedantic here, but it's actually not. San Antonio almost entirely surrounds Windcrest, but Windcrest is a separate city. There are a few places like this in San Antonio. The city chose not to incorporate these areas.
I know this because I worked at Rackspace, and one of the larger means of income for the city of Windcrest is semi-bogus traffic tickets given to people coming and going from Rackspace headquarters.
I can't tell if you're trying to troll here, but the comment you're replying to said Windcrest is a separate city (it is, with a Mayor and City Council, per https://en.m.wikipedia.org/wiki/Windcrest,_Texas), and you reply that it's part of the San Antonio MSA, which is true but not what was being discussed.
I didn't ever live in or near San Antonio (or Windcrest!), but I did live in Austin and Houston so I am familiar with the Texas phenomenon of cities entirely or almost entirely surrounded by much larger cities. For instance, near Houston there's Bellaire and West University Place.
The comment being replied to said "Well, Windcrest is a subset of San Antonio, Texas." Not "the city of San Antonio, Texas." Both officially and unofficially, San Antonio is the name of not just an incorporated city, but of the metropolitan area of which it's the largest city. When you say that the metro area, as opposed to the city, was "not what was being discussed," that was an assumption the comment I replied to was making. But nowhere is that said in the comment that started this thread.
The principle of charity (https://en.wikipedia.org/wiki/Principle_of_charity) "requires interpreting a speaker's statements to be rational and, in the case of any argument, considering its best, strongest possible interpretation." The interpretation of the statement being replied to that makes it true is "Well, Windcrest is a subset of the San Antonio, Texas metro area." The "pedantic" comment I was replying to ignored that possible interpretation in favor of an interpretation that made the comment false. But why not interpret the comment in such a way that it's true, given that there's a very common way of reading that (San Antonio the metro area) where it's so?
I'm not sure what you're attempting to show with that link. Yes, it's part of the San Antonio metro area. It's not part of the city San Antonio, Texas. Sorry if I was unclear in the distinction I was making, I thought I communicated that.
Thanks. I can't fathom why it's so hard to imagine people not knowing wtf local might mean. There were a bunch of aws admin rackers in SF. I wonder what happened to them.
We're using Rackspace Cloud Files. If they ever end up shutting down or something, what are our alternatives? We mainly use Rackspace Cloud Files CDN to deliver assets to our clients (JS, CSS, images).
Funny that AWS support is a growing business for them.
We actually gave up managed bare metal hosting with another provider for AWS a few years ago, when we found that AWS's automated infrastructure had reached a point wherein self-management was a viable option at a much lower cost.
Granted, our installation is straightforward, but overall, it seems AWS would reduce the need for managed services.
Speaking from an enterprise point of view, that makes all the difference. A lot of companies will have mishmashes of different systems with one-off setups that are not worth the effort of automating.
The main selling point of AWS, from an enterprise perspective, is to shift costs in the right balance-sheet column, lower build times, and lower headcount because you can cut the hardware boys. Everything else (automation etc) is entirely optional and only worth the effort in some particular circumstances.
It's always worth it to automate; just adjust your timeframe and internalize the admin cost. As someone who has done it both ways, let me assure you: you don't want hand-crafted bespoke servers. You want generic and repeatable.
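As a minimal sketch of what "generic and repeatable" can look like (assuming boto3; the AMI ID and user-data below are placeholders, not anyone's real setup):

    import boto3

    # Every server is stamped from the same image and the same idempotent
    # bootstrap script, so replacing one is a function call, not a project.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    USER_DATA = """#!/bin/bash
    # same bootstrap for every instance -- no hand-applied tweaks
    yum -y update
    yum -y install nginx
    systemctl enable --now nginx
    """

    def launch_web_server() -> str:
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder AMI
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            UserData=USER_DATA,
        )
        return resp["Instances"][0]["InstanceId"]

The point is that the server's entire definition lives in code, so a dead node gets replaced by calling launch_web_server() again rather than by reconstructing someone's hand-crafted snowflake.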
All this hiring and firing makes me think that the economy should err towards gigs rather than FT employment.
i.e. Full-time work is only full-time until it ends, but management rarely spells out the ending in advance, so it becomes an ongoing expense drag with diminishing returns.
Multiply this by the number of employees, and layoffs seem inevitable.
"All this hiring and firing makes me think that the economy should err towards gigs rather than FT employment."
The danger of this (to the employees) is a recession creating another 2009-like economy, where the best one could hope for is a contract gig of more than a couple of months. People tend to forget how disposably they can be treated during hard times.
Good point. Although I tend to think the timespan of contracts, whether full-time employment or temporary gigs, is of little moment if the employer falls on hard times. If they don't have the cash, termination is going to happen either way.
What might work is automated Just-in-Time contracting, with minimal agency cost and lead-times, and with automation providing the stability of full-time employment. LinkedIn 2.0 essentially (fyi: this doesn't exist, I just made it up)
It is the other way around. OpenStack is dying, we're just watching how vendors respond.
Rackspace and HP needed OpenStack to take off in order to generate demand for their public clouds. When that didn't happen HP shut down its cloud. Rackspace couldn't do that, their cloud accounted for too much of their revenue, so what we're seeing is their fallback plan.
Interesting anecdote: yesterday the OpenStack Foundation sent out an email announcing the speakers for the upcoming OpenStack summit. The subject line: "Hear from Google's VP of Cloud Platforms at OpenStack Summit Boston".
Exactly. OpenStack's strength is building out private cloud infrastructure. Deploying it is still a major undertaking (I wish things like Fuel were more reliable).
I know Amazon AWS has most of their core engineering talent in Austin. The scene is small but strong, this doesn't really reflect on the hiring climate in ATX.
As a true Austinite, I never really considered San Antonio a "tech hub" or indicator of anything in tech.
We've been with Rackspace since 1999, but I'm getting concerned about the future of the company. Can anyone recommend a comparable managed host for Windows?
I don't have services like Pingdom or Pager Duty for my sites. I have Rackspace. If a site goes down, they immediately attempt to recover it, acting within the account rules I set up with them. If that fails, I get a phone call from an expert and we troubleshoot together.
I once got a call that one of my servers had started sending out a high volume of email. When I took the call, Rackspace folks had already found the nasty script doing it and just needed my ok to take the site down for a few minutes to boot it.
The service is really great. But, it ends just below the application layer, and these days, that's where most of the problems show up. So I am looking at moving off of Rackspace, but I'm looking up the service ladder at application-aware hosting services like WP Engine or Pantheon. As opposed to down the service ladder, to a self-service "don't call us" system like AWS or Google.
Coming from a previous Rackspace customer (over 10 years), I wouldn't at this point. AWS cloud offerings are superior to Rackspace cloud and LiquidWeb dedicated servers are much more cost effective with decent support.
Their support used to be the best in the business and we were willing to pay a premium for that but even their support started to suffer starting early 2015.
It is unfortunate because Rackspace was an amazing partner to our business for a long-time, which made it a very hard choice when we decided to move on.
I was there years ago, but the main reason was support. I worked on the Enterprise/Custom team (largest clients, minimum five-figure MRR) and the phones were staffed by actual System Engineers/Admins 24/7. It might be one guy overnight, but if you were the CTO of $bigname and needed something done, from SQL restores to migrations, all you had to do was dial in. No phone tag, no wait, no voicemails.
We were very expensive, but the companies must have saved money over hiring their own engineers, as we were always growing.
AWS/GCP/Azure have made doing a lot of what we used to do incredibly more accessible to more people.
You don't. I just moved a WP multisite from Rackspace Cloud to AWS. Load times down 50%. Cost down 90%. Hosting is not, and has never been, an easy business.
That seems off. It's been 2 or so years now, but I worked for a company that had a five digit monthly bill from both Rackspace and AWS, and our costs to migrate from one to the other were within 15% when we did the math.
90% seems way off. AWS is known for not being cheap (neither is Rackspace), but I highly doubt that Rackspace was ten times the price of Amazon.
To clarify: in all likelihood you would run your top tier on spot instances and the second and third tiers via other methods... but there are ways to get a system to really handle spot loss.
There are MANY applications that are simply too complex to handle such an architecture... so do it when you can, but know when it's just stupid. Stacks are like snowflakes: we LOVE them to death, or we love them to DEATH...
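For what it's worth, the plumbing that makes spot loss survivable is well defined: EC2 publishes an interruption notice on the instance metadata endpoint roughly two minutes before reclaiming the box. A rough sketch of a worker that polls for it (IMDSv1-style for brevity; the drain and work functions are placeholders):

    import time
    import urllib.error
    import urllib.request

    # EC2 posts a notice here ~2 minutes before reclaiming a spot instance.
    NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

    def interruption_pending() -> bool:
        try:
            with urllib.request.urlopen(NOTICE_URL, timeout=1) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False  # 404 means no notice yet; timeouts count as "no"

    def main_loop():
        while True:
            if interruption_pending():
                drain_and_checkpoint()  # placeholder: stop taking work, flush state
                break
            do_some_work()              # placeholder: the actual job
            time.sleep(5)

Two minutes is enough to checkpoint a stateless-ish tier; it's nowhere near enough for the complex stacks mentioned above, which is exactly the dividing line.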
Uh, AWS is absolutely known for being cheap. I'm in the hosting business as well and our AWS offerings are a fraction of the cost of other solutions, especially when you consider what you get.
With AWS/Google, you don't have to worry about hardware and OS; that's the entire point of the cloud. You are more focused on your apps. Of course it costs a bit more, but hopefully you save it by not having to deal with hardware or OS issues.
Sure, there's Lambda, but if you need EC2 then you do need to worry about the OS. And while you do not need to worry about the hardware breaking down, that's the same as when you rent a physical or virtual server, because you need to build redundant infra anyway, and whether the fallen node comes back in an hour or you spin it up yourself is of little consequence. And you still need to understand your hardware if you want performance.
If you're talking about EC2 instances, it's really the hardware and physical networking you don't have to worry about. You still manage the virtual networking, hosts, etc.
@devopsproject I don't think users/companies value colo & support *until* bad things happen and the costs of not running critical business are evident.
Sort of off topic, but can confirm. I've had issues with GCP where site is down and it's taken 1+ days to hear back from someone.
Maybe there was a way to escalate it, but it wasn't readily evident. Which tells you something. So, I just moved on to AWS. More transparency on the platform and more predictable support.
> So, I just moved on to AWS. More transparency on the platform and more predictable support.
Bad news. AWS is no different. EC2, IAM, whatever will have issues for hours, AWS won't update their status board (to the point someone wrote an extension to show the "real" status [1]), etc. And this is with business support we pay for. You needed to autoscale? Better go make yourself a mojito and wait for the dust to settle.
"There is no cloud, its just someone else's computer" [2]
"I've had issues with GCP where site is down and it's taken 1+ days to hear back from someone.
Maybe there was a way to escalate it, but it wasn't readily evident."
In the end someone has a pager, a machine and an editor. In ye olde days this problem was in-house with a colo machine. That's the reasoning behind uptime numbers: 99% uptime over a year is about 361 and a third days up, which still leaves roughly 3.65 days down. Can you lose three and a half days at Christmas? (web commerce) Or the same time on a new product release? (SaaS) Or that much development time during a critical bug fix for your customers?
The bad stuff always happens at the worst time. The market is saying *cloud*, but the tradeoff is *service*. I don't know the answer(s).
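The arithmetic behind those uptime promises, as a quick sketch:

    # Downtime allowed per year at common availability targets.
    HOURS_PER_YEAR = 365 * 24  # 8760

    for uptime in (0.99, 0.999, 0.9999):
        downtime_h = HOURS_PER_YEAR * (1 - uptime)
        print(f"{uptime:.2%} uptime -> {downtime_h:6.2f} h/yr ({downtime_h / 24:.2f} days)")

    # 99.00% uptime ->  87.60 h/yr (3.65 days)
    # 99.90% uptime ->   8.76 h/yr (0.37 days)
    # 99.99% uptime ->   0.88 h/yr (0.04 days)

And none of those numbers say anything about *when* the downtime lands, which is the real point here.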
Speaking as a Rackspace customer, if any of these guys are Linux techs, and you're looking, hire them. With signing bonuses.
Anecdote: A couple years ago, I had one explain to me (in a way that made sense) how the battery on the raid array was probably the cause of some problems with https. And _he was right_.
Maybe not everyone there is a certified genius, but that really blew my mind. They really know hardware. I haven't talked to one who couldn't save my tail in a pinch. Rackspace might seem a bit on the expensive side, but their support is absurdly good.
Former Racker here. While I wouldn't agree all of their techs are amazing, what I liked about working there was the vast amount of knowledge spread between teams, coupled with Texan kindness and hospitality. Generally speaking, any answer I needed I could get from a phone call or a few IMs.
Working there was a great learning experience, and I met a ton of super intelligent, motivated people from the Austin office.
But all good things come to an end. In early 2015 I noticed talent being pushed towards newer offerings (e.g. Azure support), which made the typical, aging "Linux support for small to medium business on bare metal machines" not so 'fanatical' anymore. People started to jump ship because of the force behind the changes. It was a tough transition and probably still is, so I'm not really surprised that the layoffs are continuing.
In any case, I appreciate your compliment and I'm glad you have had a good time with the company.
I was a previous Rackspace customer for over 10 years and at one time had 9 dedicated servers with them. Their Linux techs were top notch. IMO you are smart to want to obtain some of them.
Unfortunately, starting in early 2015, the support began to deteriorate. That, coupled with the cost we were paying for their dedicated servers and fanatical support, stopped making sense.
My experience has been sadly similar. We used to have probably 30 servers at the peak across several client accounts.
But yup, around the first part of 2015, what used to be a one-ring call to connect with an expert became 5 minutes, then 15, and a couple weeks back I waited 45 minutes to talk to a Linux tech. Unfuckingacceptable, not as a "managed" customer paying an extra $100/mo and $0.12. We've slowly but surely moved clients to other hosts, and I can count the servers still at Rackspace on one hand and still have fingers.
Yep. I know a guy who was a top-level tech guy there; he ran a big hardware lab and did support for tough cases. He left, according to him, because management just didn't understand the tech side anymore and was implementing all kinds of new things the support and tech guys didn't like; talent had been leaving for a while.
A failed battery on a nicer RAID controller will usually cause its overall performance to drop quite a bit.
When a program writes bytes to the file system, those bytes aren't guaranteed to arrive on the underlying, permanent storage. They're cached for some variable amount of time in the memory of the host operating system.
In UNIX, calling fsync will force the OS to flush any outstanding writes to the underlying storage, which should mean they're safe. So far so good.
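A minimal sketch of that pattern in Python (the directory fsync at the end is an extra step that's often forgotten):

    import os

    def durable_write(path: str, data: bytes) -> None:
        # Write data and push it, as far as the OS can tell, to stable storage.
        with open(path, "wb") as f:
            f.write(data)         # lands in Python's buffer, then the page cache
            f.flush()             # userspace buffer -> kernel
            os.fsync(f.fileno())  # ask the kernel to flush to the device

        # fsync the containing directory too, so the directory entry
        # for a newly created file also survives a crash.
        dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
        try:
            os.fsync(dir_fd)
        finally:
            os.close(dir_fd)

The catch, as the rest of this comment explains, is where "the device" actually ends.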
Nicer RAID controllers have a lot of high-speed memory, and they are basically their own operating system. When they receive a write from the host OS, and their battery is good, they lie to the host OS and say that the write went all the way through to disk/SSD, even though it probably didn't. This has the effect of making the fsync call return a lot more quickly.
The bytes are still safe in this case because if there's a power loss of the whole server, there's enough battery power left to allow the controller to write through any data it lied about writing.
When the battery goes bad, by default the RAID controller will cease its lying about when the bytes it receives are written all the way through, which has the effect of slowing things down.
I can't say how lower IO performance caused the https problem in question without more details, but there are a lot of possible correlations.
NB: For brevity, I simplified a lot of how this stuff works, so some of what I said above isn't 100% accurate as stated.
Not the OP, but a guess: A lot of RAID cards will switch from writeback mode to writethrough when the battery is low, because they can't guarantee that the (battery-backed) cache will persist across a power failure. Writethrough mode makes writes significantly slower, which could cause problems higher up the stack.
Yep, they will also go write-through during a battery "learn cycle". The Dell PERC controllers are notorious for this, and there is poor visibility into it. You basically have to know about the awful MegaCli utility to gain any insight.
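A rough sketch of the kind of poking this takes (assuming LSI's MegaCli64 binary at its usual install path; exact output strings vary across controller firmware, so treat the string match below as illustrative):

    import subprocess

    MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # typical install path

    def megacli(*args: str) -> str:
        return subprocess.run(
            [MEGACLI, *args], capture_output=True, text=True, check=True
        ).stdout

    # Battery state for all adapters.
    bbu = megacli("-AdpBbuCmd", "-GetBbuStatus", "-aALL")

    # Current cache policy for all logical drives.
    cache = megacli("-LDGetProp", "-Cache", "-LAll", "-aAll")

    if "WriteThrough" in cache:
        print("WARNING: array dropped to write-through (dead battery or learn cycle?)")

Monitoring this proactively beats discovering it from application latency graphs.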
What is the learn cycle?
"The purpose of the learn cycle is to determine the condition of the battery. The learn cycle charges, fully discharges, and then recharges the battery in order to determine the condition and health of the battery. The battery full charge capacity degrades over time and a battery is deemed completely degraded when it can no longer hold a charge for 24 hours and must be replaced."
What kind of trash-tier reliability-increasing hardware stops working for a day every 90 days? I'm really surprised they didn't put two or three batteries into it.
All of them. You learn to disable the learn cycle; most people learn that the hard way. I switched to flash-backed cache for this reason (along with being sick of replacing batteries).
Guessing the battery died and the clock reset, throwing off an HTTPS parameter. Happens to my daughter's tablet when she forgets to charge it, and then she can't connect to HTTPS sites without resetting the clock.
Why on earth is this answer downvoted? I'm pretty sure clock resets can cause HTTPS problems (even though they are unlikely to be in the OP's case, it being a server).
And if you disagree, please explain...so that we can all learn and understand.
Raid battery has nothing to do with the clock. Even the motherboard battery affects the clock only when the machine is turned off. Normal systems connected to the internet will sync to NTP both at boot time and continuously afterwards anyway.
Also the server time doesn't matter for TLS. It's the client that has to verify the certificate validity. (https://tools.ietf.org/html/rfc5246.html#section-7.4.1.2 "Clocks are not required to be set correctly by the basic TLS protocol")
So no, raid battery should have nothing to do with the system clock.
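To make the client-side nature of the check concrete, a small sketch: the handshake below fails with a certificate error if the local clock falls outside the cert's validity window, no matter what the server's clock says (the hostname is just an example):

    import socket
    import ssl

    hostname = "example.com"  # placeholder host
    ctx = ssl.create_default_context()

    try:
        with socket.create_connection((hostname, 443), timeout=5) as sock:
            # Certificate dates are checked here, against *this* machine's clock.
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                print("handshake OK; cert valid until", cert["notAfter"])
    except ssl.SSLCertVerificationError as e:
        print("client-side verification failed (clock skew?):", e)

Which matches the tablet anecdote above: it's the verifying client whose clock has to be sane.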
This has popped up a couple of times and I never got around to confirming either way. I always went to the trouble of syncing to rule it out before moving on. (sometimes annoying on an isolated test network)
Thanks for taking the time to link the relevant section of the RFC. I'll remember this next time (or at least know where to check quickly ;) )
1. These things would still have AC power. The battery is more like a UPS battery - the only way for clock reset to happen is for both external and battery power to fail, at which point it will die from lack of power anyhow.
2. The clock on the server wouldn't have anything to do with the RAID array. If I disconnect a hard drive, my motherboard still gets along fine.
I used them for email for a while and I couldn't get my mail client to connect (Airmail, first gen). The guy on the phone spent hours with me and even purchased a copy of the app just to see if it would work for him. I don't remember what ultimately fixed it but I got it working - and it wasn't even a rackspace problem if I recall correctly. Still impressed by how far they were willing to go to help a customer.
I agree. Their support is always really great. Was the battery causing the time to drift or something, so that the SSL stuff saw the wrong time & broke? Just wondering how a battery could mess that up!
I don't remember now to be honest. I should have written it down :D It was an early interaction with them, and I didn't seriously think he would be right.
Reading this makes me realize how naive I was. I thought I could rent a cheap dedicated server from OVH and leverage Docker to build nice infrastructure without paying too much. Better to stay with Google/AWS then.
> “We were very intent on preserving as many Racker positions as possible in our customer-facing roles, and we’re confident we are not going to affect fanatical support for our customers,” the spokesperson said.
At their HQ, they gave out a 'fanatical support' award which was a straitjacket, with some tribal backstory related to being 'so fanatical that you're insane'.
could you explain what's bizarre about the branding? Do you not like the word 'fanatical'? (which just means 'filled with excessive and single-minded zeal'/'obsessively concerned with something.').
> what's bizarre about the branding [... 'fanatical'] which just means 'filled with excessive and single-minded zeal'
Branding your support 'excessive' or 'single-minded' would also be bizarre.
Fanaticism is typically taken with religious or political connotations, and derogatorily.
Cambridge [0] says:
> [informal] extremely interested in something, to a degree that someone [sic] people find unreasonable
> [disapproving] holding extreme beliefs that may lead to unreasonable or violent behaviour: a fanatical group that has threatened to assassinate doctors
Oxford [1] says:
> A person filled with excessive and single-minded zeal, especially for an extreme religious or political cause: 'religious fanatics'
> ... originally described behaviour that might result from possession by a god or demon ...
(Oxford [1])
Add '~atic' to any of those and it sounds much more extreme; something to disapprove of.
I quoted Cambridge online dictionary above as defining it as disapproving, the same dictionary defines 'fan' exactly as you use it, and does not give 'fanatic' as a synonym.
Of course it originates as an abbreviation, but the usual usage is very different.
Certainly here in Australia I don't think "football fanatic" would be considered derogatory or disapproving. "Religious fanatic" is a different question, of course, but I happen to think Rackspace's branding is perfectly fine!
It really doesn't, everyone knows that football fanatic is just a big fan of football. Same for any other interest. It's hyperbole but one that everyone knows is hyperbole. Strange that you think otherwise to be honest.
> They've reimagined the company as providing support on third-party platforms, namely AWS and Azure, instead of focusing on building, selling and supporting their own offerings.
I'm a former Racker.