and has easy-to-use multi-regional support at a fraction of what it would cost on AWS. I point my NAS box at home directly at GCS instead of S3 (sadly, I had to modify its little PHP client code to point it at storage.googleapis.com), and it works like a charm. Resumable uploads work differently between the two services, but honestly, since we let you upload up to 5TB per object, I haven't needed to bother yet.
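For anyone curious what "pointing it at storage.googleapis.com" involves: GCS's XML interoperability mode accepts S3-style HMAC request signatures, so an S3 client mostly needs a new endpoint plus an interop key pair. A minimal, simplified sketch of the signature itself (the secret, bucket, and object names here are made up, and `x-amz-`/`x-goog-` headers are ignored for brevity):

```python
import base64
import hashlib
import hmac

def sign_interop_request(secret, method, path, date, content_md5="", content_type=""):
    """Build an S3-style (signature v2) signature that GCS's XML
    interoperability API accepts at storage.googleapis.com.
    Simplified: the canonicalized extension headers are omitted."""
    string_to_sign = "\n".join([method, content_md5, content_type, date, path])
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical interop HMAC secret; real ones come from the Cloud Storage settings page.
sig = sign_interop_request(
    "example-secret", "GET", "/my-bucket/backup.tar", "Tue, 28 Feb 2017 18:00:00 GMT"
)
# The request would then carry: Authorization: AWS <access-id>:<sig>
```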
Again, Disclosure: I work on Google Cloud (and we've had our own outages!).
By boulos 8 years ago
Apologies if this is too far off-topic, but I want to share an anecdote of some serious problems we had with Google Cloud and why I'd be careful about trusting them with critical services:
Our production Cloud SQL instance started throwing errors: we could not write anything to the database. We have Gold support, so we quickly created a ticket. While there was a quick reply, it took a total of 21+ hours of downtime to get the issue fixed. During the downtime there is nothing you can do to speed this up - you're waiting helplessly. Because Cloud SQL is a hosted service, you cannot connect to a shell or access any filesystem data directly - there is nothing you can do other than wait for the Google engineers to resolve the problem.
When the Cloud SQL instance was up and running again, support confirmed that there is nothing you can do to prevent a filesystem crash; it "just happens". The workaround they offered is to have a failover set up, so it can take over in case of downtime. The worst part is that Google refused to offer credit, as according to their SLA this is not considered downtime. The SLA [1] states: "with respect to Google Cloud SQL Second Generation: all connection requests to a Multi-zone Instance fail" - so as long as the SQL instance accepts incoming connections, there is no downtime. Your data can get lost, your database can be unusable, your whole system might be down: according to Google, this is not downtime.
TL;DR: make sure to check the SLA before moving critical stuff to Google Cloud.
I've used both Google Cloud and AWS, and as of a year or so ago, I'm a Google Cloud convert. (Before that, you guys didn't at all have your shit together when it came to customer support)
It's not in bad taste, despite other comments saying otherwise. We need to recognize that competition is good, and Amazon isn't the answer to everything.
By JPKab 8 years ago
The brilliance of open sourcing Borg (aka Kubernetes) is evident in times like these. We[0] are seeing more and more SaaS companies abstract away their dependencies on AWS or any particular cloud provider with Kubernetes.
Managing stateful services is still difficult but we are starting to see paths forward [1] and the community's velocity is remarkable.
K8s seems to be the wolf in sheep's clothing that will break AWS' virtual monopoly on IaaS.
[0] We (gravitational.com) help companies go "multi-region" or on-prem using Kubernetes as a portable run-time.
I have a component in my business that writes about 9 million objects a month to Amazon S3. But to take advantage of falling storage costs for those objects, I created an identical archiving architecture on Google Cloud.
It took me about 15 minutes to spin up the instances on Google Cloud that archive these objects and upload them to Google Storage. While we didn't have access to any of our existing objects on S3 during the outage, I was able to keep storing all new objects. (Our workload is very heavily geared towards writes for these objects.)
It turns out this cost-optimization architecture works quite well as a disaster-recovery architecture.
By blantonl 8 years ago
Opportunistic, sure. But I did not know about the API interoperability. Given the prices, makes sense to store stuff in both places in case one goes down.
By sachinag 8 years ago
Not poor taste at all. Love GCP. I actually host two corporate static sites using Google Cloud Storage and it is fantastic. I just wish there was a bucket wide setting to adjust the cache-control setting. Currently it defaults to 1 hour, and if you want to change it, you have to use the API/CLI and provide a custom cache control value each upload. I'd love to see a default cache-control setting in the web UI applying to the entire bucket.
I also want to personally thank Solomon (@boulos) for hooking me up with a Google Cloud NEXT conference pass. He is awesome!
By nodesocket 8 years ago
Hopefully you're still there even though S3 is back up. I have an interesting question I really, really hope you can answer. (Potential customer(s) here!!)
There are a large number of people out there looking intently at ACD's "unlimited for $60/yr" and wondering what that really means.
I recently found https://redd.it/5s7q04 which links to https://i.imgur.com/kiI4kmp.png (small screenshot) showing a user hit 1PB (!!) on ACD (1 month ago). If I understand correctly, the (throwaway) data in question was slowly being uploaded as a capacity test. This has surprised a lot of people, and I've been seriously considering ACD as a result.
On the way to finding the above thread I also just discovered https://redd.it/5vdvnp, which details how Amazon doesn't publish transfer thresholds, their "please stop doing what you're doing" support emails are frighteningly vague, and how a user became unable to download their uploaded data because they didn't know what speed/time ratios to use. This sort of thing has happened heaps of times.
I also know a small group of Internet archivists that feed data to Archive.org. If I understand correctly, they snap up disk deals wherever they can find them, besides using LTO4 tapes, the disks attached to VPS instances, and a few ACD and GDrive accounts for interstitial storage and crawl processing, which everyone is afraid to push too hard so they don't break. One person mentioned that someone they knew hit a brick wall after exactly 100TB uploaded - ACD simply would not let this person upload any more. (I wonder if their upload speed made them hit this limit.) The archive group also let me know that ACD was better at storing lots of data, while GDrive was better at smaller amounts of data being shared a lot.
So, I'm curious. Bandwidth and storage are certainly finite resources, I'll readily acknowledge that. GDrive is obviously going to have data-vs-time transfer thresholds and upper storage limits. However, GSuite's $10/month "unlimited storage" is a very interesting alternative to ACD (even at twice the cost) if some awareness of the transfer thresholds was available. I'm very curious what insight you can provide here!
The ability to create share links for any file is also pretty cool.
By i336_ 8 years ago
Now that's what I call a shameless plug!
By ptrptr 8 years ago
We would seriously consider switching more to GCS if your cloud functions were as powerful as AWS Lambda (triggering from an S3-style event) and supported Python 3.6 with serious control over the environment.
By scrollaway 8 years ago
I keep telling people that in my view, Google Cloud is far superior to AWS from a technical standpoint. Most people don't believe me... Yet. I guess it will change soon.
By simonebrunozzi 8 years ago
I'm in the process of moving to GCS mostly based on how byzantine the AWS setup is. All kinds of crazy unintuitive configurations and permissions. In short, AWS makes me feel stupid.
By joshontheweb 8 years ago
As far as I understand, the S3 API of Cloud Storage is meant as a temporary solution until you properly migrate to Google's APIs.
The S3 keys it produces are tied to your developer account. This means that if someone gets the keys from your NAS, they will have access to all the Cloud Storage buckets you have access to (e.g. your employer's).
I use Google Cloud but not Amazon. Once I wanted an S3 bucket to try with NextCloud (then OwnCloud), and I was genuinely frightened to generate an S3 key tied to my Google developer account.
By andmarios 8 years ago
"fraction of the cost" - how do you figure? Or are you just saying from a cost-to-store perspective?
Your egress prices are quite a bit higher than CloudFront's for sub-10TB volumes ($0.12/GB vs $0.085/GB).
Given the track record of S3 outages versus the time you're up and serving egress, S3 still seems to win on cost. And if all you're worried about is cross-region data storage, you're probably a big player with an AWS enterprise agreement in place that offsets the cost of storage.
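At the list prices quoted above, the egress gap adds up quickly; a quick back-of-the-envelope:

```python
# Egress cost for 10 TB served, at the per-GB list prices from the comment above.
gb = 10 * 1000  # 10 TB expressed in GB (decimal, as the pricing pages do)
gcs_egress_usd = round(gb * 0.12, 2)
cloudfront_usd = round(gb * 0.085, 2)
print(gcs_egress_usd, cloudfront_usd)  # 1200.0 850.0 -- a $350 gap per 10 TB
```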
By rynop 8 years ago
So this is more compute-related, but do you know if there are any plans to support the equivalent of the webpagetest.org (WPT) private-instance AMI on your platform?
Not only is webpagetest.org a Google product, but it's also much better suited to the minute-by-minute billing cycle of Google Cloud compute. For any team not needing to run hundreds of tests an hour, the cost difference between running a WPT private instance on EC2 versus on Google Cloud compute could easily be in the thousands of dollars.
By Spunkie 8 years ago
Would use Google but I just can't give up access to China. Sad because I also sympathize with Google's position on China.
By malloryerik 8 years ago
@boulos, not in bad taste at all - happy Google convert and GCS user. Works very well for us, YMMV.
By zoloateff 8 years ago
If you made a .NET library that allows easily connecting to both AWS and GCS by only changing the endpoint, I would certainly use that library instead of Amazon's own.
Just saying, it gets you a foot in the door.
By DenisM 8 years ago
I had no idea this was an option. Great to know!
By danielvf 8 years ago
I have had problems integrating Apache Spark with Google Storage, especially because S3 is directly supported in Spark.
If you are API-compatible with S3, could you make it easy/possible to work with Google Storage inside Spark?
Remember, I may or may not run my Spark on Dataproc.
By sandGorgon 8 years ago
What is your NAS box doing with S3/GCS ?
By mbrumlow 8 years ago
S3 applications can use any object store if they use S3Proxy:
How about giving a timeline of when Australia will be launching? I see you're hiring staff, and have a "sometime 2017" goal on the site, but how about a date estimate? :)
By thejosh 8 years ago
Does GCS support events yet?
By philliphaydon 8 years ago
As Relay's chief competitor in this region, we of Windsong have benefited modestly from the overflow; however, until now we thought it inappropriate to propose a coordinated response to the problem.
By hyperpallium 8 years ago
What software are you using for your NAS box?
By espeed 8 years ago
Classy parley. I'll allow it.
By pmarreck 8 years ago
Competition is great for consumers!
By masterleep 8 years ago
S3 is currently (22:00 UTC) back up.
The timeline, as observed by Tarsnap:
First InternalError response from S3: 17:37:29
Last successful request: 17:37:32
S3 switches from 100% InternalError responses to 503 responses: 17:37:56
S3 switches from 503 responses back to InternalError responses: 20:34:36
First successful request: 20:35:50
Most GET requests succeeding: ~21:03
Most PUT requests succeeding: ~21:52
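From those timestamps, the rough arithmetic of the outage window (a quick stdlib sketch; times are UTC as reported above):

```python
from datetime import datetime

fmt = "%H:%M:%S"
first_error = datetime.strptime("17:37:29", fmt)    # first InternalError from S3
first_success = datetime.strptime("20:35:50", fmt)  # first successful request
outage = first_success - first_error
print(outage)  # 2:58:21 -- almost three hours before any request succeeded again
```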
By cperciva 8 years ago
Thanks for taking the time to post a timeline from the perspective of an S3 customer. It will be interesting to see how this lines up against other customer timelines, or the AWS RFO.
By josephb 8 years ago
Playing the role of the front-ender who pretends to be full-stack if the money is right, can someone explain the switch from internal error to 503 and back? Is that just them pulling s3 down while they investigate?
By kaishiro 8 years ago
No. SoundCloud uses AWS S3. It is still down. This is false information.
By thenewregiment2 8 years ago
A piece of hard-earned advice: us-east-1 is the worst place to set up AWS services. You're signing up for the oldest hardware and the most frequent outages.
For legacy customers, it's hard to move regions, but in general, if you have the chance to choose a region other than us-east-1, do that. I had the chance to transition to us-west-2 about 18 months ago and in that time, there have been at least three us-east-1 outages that haven't affected me, counting today's S3 outage.
EDIT: ha, joke's on me. I'm starting to see S3 failures as they affect our CDN. Lovely :/
By gamache 8 years ago
Reminds me of an old joke: Why do we host on AWS? Because if it goes down then our customers are so busy worried about themselves being down that they don't even notice that we're down!
By traskjd 8 years ago
I'm getting the same outage in us-west-2 right now.
By xbryanx 8 years ago
My advice is: don't keep all your eggs in one basket. AZs give you localised redundancy, but as cloud is cheap and plentiful, you should be using at least two regions to house your solution (if it's important to you).
EDIT: less arrogant. I need a coffee.
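The two-or-more-regions idea can be sketched as a thin wrapper that mirrors writes and falls back on reads. The client interface here is invented for illustration; real `primary`/`secondary` objects would be thin wrappers over, say, the S3 and GCS SDKs:

```python
class ReplicatedStore:
    """Mirror writes to two object stores and fall back on reads.

    `primary` and `secondary` are anything exposing get(key) / put(key, data).
    This is a sketch of the idea, not a production client.
    """

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def put(self, key, data):
        self.primary.put(key, data)
        self.secondary.put(key, data)  # best-effort mirror to the second region/provider

    def get(self, key):
        try:
            return self.primary.get(key)
        except Exception:
            # Primary outage: serve the replica instead of going down.
            return self.secondary.get(key)
```

The write path doubles your storage bill, which is exactly the trade the comment is arguing for.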
By movedx 8 years ago
It shouldn't be technically possible to lose S3 in every region - how did Amazon screw this up so badly?
By bischofs 8 years ago
Amen. We set up our company cloud 2 years ago in US-West-2 and have never looked back. No outage to date.
By twistedpair 8 years ago
Is us-east-2 (Ohio) any better (minus this aws-wide S3 issue)?
By compuguy 8 years ago
Probably valid, though in this case while us-west-1 is still serving my static websites, I can't push at all.
By jchmbrln 8 years ago
The s3 outage covered all regions.
By nola-radar 8 years ago
That's a really good point!
By notheguyouthink 8 years ago
I used to track DynamoDB issues and holy crap, AWS East had a 1-2 hour outage at least every 2 weeks. Never in any of the other regions. AWS East is THA WURST
By shirleman 8 years ago
Yup, same here. It has been a few minutes already. Wanna bet the green checkmark[1] will stay green until the incident is resolved?
In December 2015 I received an e-mail with the following subject line from AWS, around 4 in the morning:
"Amazon EC2 Instance scheduled for retirement"
When I checked the logs it was clear the hardware had failed 30 minutes before they scheduled it for retirement. The EC2 instance and its root device data were gone. The e-mail also said "you may have already lost data".
So I know that Amazon schedules servers for retirement after they already failed, green check doesn't surprise me.
By emrekzd 8 years ago
It's crazy how much better the communication (including updates and status pages) is of the companies that rely on AWS than AWS' communication itself.
So, global S3 outage for more than an hour now. Still green, still talking about "US East issue". I'm amazed.
By Fiahil 8 years ago
Well, at least our decision to split services has paid off. All of our web app infrastructure is on AWS, which is currently down, but our status page [0] is on Digital Ocean, so at least our customers can go see that we are down!
EDIT UPDATE: Well, I spoke too soon - even our status page is down now, but not sure if that is linked to the AWS issues, or simply the HN "hug of death" from this post! :)
EDIT UPDATE 2: Aaaaand, back up again. I think it just got a little hammered from HN traffic.
By cyberferret 8 years ago
FYI to S3 customers, per the SLA, most of us are eligible for a 10% credit for this billing period. But the burden is on the customer to provide incident logs and file a support ticket requesting said credit (it must be really challenging to programmatically identify outage coverage across customers /s)
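The credit tiers themselves are simple enough to express in a few lines. The percentages below match the S3 SLA as commonly cited at the time; verify against the current SLA text before actually filing a claim:

```python
def s3_sla_credit_pct(monthly_uptime_pct):
    """Service-credit tiers per the S3 SLA (as of this era):
    at or above the 99.9% commitment -> no credit, 99.0-99.9% -> 10%,
    below 99.0% -> 25%."""
    if monthly_uptime_pct >= 99.9:
        return 0    # within the SLA commitment, no credit
    if monthly_uptime_pct >= 99.0:
        return 10   # the tier most customers land in for this outage
    return 25

# A roughly four-hour outage in a 30-day month:
uptime = 100 * (1 - 4 / (30 * 24))
print(round(uptime, 2), s3_sla_credit_pct(uptime))  # 99.44 10
```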
The dashboard not changing color is related to S3 issue.
See the banner at the top of the dashboard for updates.
So it's not just a joke... S3 being down actually breaks its own status page!
By geerlingguy 8 years ago
Thank god I checked HN. I was driving myself crazy last half hour debugging a change to S3 uploads that I JUST pushed to production. Reminds me of the time my dad had an electrician come to work on something minor in his house. Suddenly power went out to the whole house, electrician couldn't figure out why for hours. Finally they realized this was the big east coast blackout!
By jliptzin 8 years ago
Corporate language is entertaining while we all pull out our hair.
"We are investigating increased error rates for Amazon S3" translates to "We are trying to figure out why our mission critical system for half the internet is completely down for most (including some of our biggest) customers."
I've been fuzzing S3 parameters for the last couple of hours...
And now it's down.
By maxerickson 8 years ago
All: I hate to ask this, but HN's poor little single-core server process is getting hammered and steam is coming out its ears. If you don't plan to post anything, would you mind logging out? Then we can serve you from cache. Cached pages are updated frequently so you won't miss anything. And please do log back in later.
(Yes it sucks and yes we're working on fixing it. We hate slow software too!)
By dang 8 years ago
"I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror, and were suddenly silenced. I fear something terrible has happened."
By greenhathacker 8 years ago
Down for us as well. We have cloudfront in front of some of our s3 buckets and it is responding with
CloudFront is currently experiencing problems with requesting objects from Amazon S3.
Can I also say I am constantly disappointed by AWS's status page: https://status.aws.amazon.com/. It seems that whenever there is an issue, this takes a while to update. Sometimes all you see is a green checkmark with a tiny icon noting some issue. Why not make it orange or something? Surely they must have some kind of external monitor on these things that could be integrated here?
edit: Since posting my comment they added a banner of
"Increased Error Rates
We are investigating increased error rates for Amazon S3 requests in the US-EAST-1 Region."
However S3 still shows green and "Service is operating normally"
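An external monitor of the kind suggested above is genuinely small. A sketch, with the bucket URL invented and the HTTP call injected as a callable so the logic stays testable (in practice `fetch` could wrap urllib with a short timeout):

```python
def s3_probe_status(fetch):
    """Tiny external health probe: `fetch` takes a URL and returns an
    HTTP status code, or raises on timeout/connection failure."""
    try:
        code = fetch("https://example-bucket.s3.amazonaws.com/healthcheck.txt")
    except Exception:
        return "red"       # timeout or connection error: clearly down
    if code == 200:
        return "green"
    return "red" if code >= 500 else "yellow"  # 5xx is down; 4xx is "degraded"
```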
By chrisan 8 years ago
Sysadmin: I can forgive outages, but falsely reporting 'up' when you're obviously down is a heinous transgression.
Somewhere a sysadmin is having to explain to a mildly technical manager that AWS services are down and affecting business critical services. That manager will be chewing out the tech because the status site shows everything is green. Dishonest metrics are worse than bad metrics for this exact reason.
Any sysadmin who wasn't born yesterday knows that service metrics are gamed relentlessly by providers. Bluntly there aren't many of us, and we talk. Message to all providers: sysadmins losing confidence in your outage reporting has a larger impact than you think. Because we will be the ones called to the carpet to explain why <services> are down when <provider> is lying about being up.
By johngalt 8 years ago
They don't show it on the status dashboard at https://status.aws.amazon.com/ (at least at the time I originally posted this comment).
Edit 2: And now the event disappeared from my personal health dashboard too. But we are still experiencing issues. WTH.
By jrs235 8 years ago
It's interesting to note the cascading effects. For example, I was immediately hit by four problems:
* Slack file sharing no longer works, hangs forever (no way to hide the permanently rolling progress bar except quitting)
* Github.com file uploads (e.g. dropping files into a Github issue) don't work.
* Imgur.com is completely down.
* Docker Hub seems to be unavailable. Can't pull/push images.
By atombender 8 years ago
what's truly incredible is that S3 has been offline for h̶a̶l̶f̶ ̶a̶n̶ ̶h̶o̶u̶r̶ two hours now and Amazon still has the audacity to put five shiny green checkmarks next to S3 on their service page.
they just now put up a box at the top saying "We are investigating increased error rates for Amazon S3 requests in the US-EAST-1 Region."
increased error rates? really?
Amazon, everything is on fire. you are not fooling anyone
It's not just us-east-1! They're being extremely dishonest with the green checkmarks. We can't even load the s3 console for other regions. I would post a screenshot, but Imgur is hosed by this too.
By STRML 8 years ago
It's unreal watching key web services fall like dominoes. It's too bad the concept of "too big to fail" applies only to large banks and countries.
By rrggrr 8 years ago
Thanks for sharing. I overheard someone on my team say that a production user is having problems with our service. The team checked AWS status, but only took notice of the green checkmarks.
Through some dumb luck (and desire to procrastinate a bit), I opened HN and, subsequently, the AWS status page and actually read the US-EAST-1 notification.
HN saves the day.
By mabramo 8 years ago
Wow, S3 is a much bigger single point of failure than I have imagined. Travis CI, Trello, Docker Hub, ...
I can't even install packages because the binary cache of NixOS is down. Love living in the cloud.
By rnhmjoj 8 years ago
Notice how Amazon.com itself is unaffected. They're a lot smarter than us.
By benwilber0 8 years ago
And they've just broken four-9's uptime (53 minutes). They must be pretty busy, since they still haven't bothered to acknowledge a problem publicly...
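The "53 minutes" figure falls out of the nines arithmetic directly; a quick sketch:

```python
def annual_downtime_minutes(nines):
    """Allowed downtime per year at an availability of `nines` nines."""
    unavailability = 10 ** -nines
    return unavailability * 365 * 24 * 60  # minutes in a (non-leap) year

print(round(annual_downtime_minutes(4), 1))  # 52.6 -- the ~53-minute budget above
print(round(annual_downtime_minutes(3), 1))  # 525.6 -- nearly nine hours at three nines
```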
By bandrami 8 years ago
Best thing about incidents like these: post-mortems for systems of this scale are absolutely fascinating. Hopefully they publish one.
By obeattie 8 years ago
This seems like as appropriate a time as any... Anyone want to list some competitors to S3? Bonus if it also provides a way to host a static website.
Apple's iCloud is having issues too, probably stemming from AWS. Ironically Apple's status page has been updated to reflect the issue while Amazon's page still shows all green. https://www.apple.com/support/systemstatus/
By valine 8 years ago
Wow this is a fun one. I almost pooped my pants when I saw all of our elastic beanstalk architecture disappear. It's so relieving to see it's not our fault and the internet feels our pain. We're in this together boys!
I'm curious how much $ this will lose today for the economy. :)
Incredible how much stuff this affected for me. Opbeat is not loading and I can't even deploy because CircleCI seems to depend on S3 for something and my build is "Queued". This seems so dangerous...
By rawrmaan 8 years ago
It is, and of course the checkmark will stay green throughout this, as Amazon doesn't care about actually letting its customers know they have a problem.
By c4urself 8 years ago
Now might be a good time to ponder a lasting solution. Clearly, we cannot trust AWS, or any other single provider, to stay up. What is the shortest, quickest to implement, path to actual high availability?
You would have to host your own software which can also fail, but then at least you could do something about it. For example, you could avoid changing things during critical times of your own business (e.g. a tradeshow), which is something no standard provider could do. You could also dial down consistency for the sake of availability, e.g. keep a lot of copies around even if some of them are often stale - more often than not this would work well enough for images.
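The "dial down consistency for availability" idea from the paragraph above can be sketched as a cache that serves a stale copy when the origin is down. The `origin` callable here is hypothetical (e.g. a fetch from object storage):

```python
import time

class StaleOkCache:
    """Trade consistency for availability: prefer a stale copy over an error.

    `origin` is any callable key -> value; `ttl` is the freshness window
    in seconds. A sketch, not a production cache (no eviction, no locking).
    """

    def __init__(self, origin, ttl=60):
        self.origin = origin
        self.ttl = ttl
        self.store = {}  # key -> (value, fetched_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]  # still fresh
        try:
            value = self.origin(key)
        except Exception:
            if entry is not None:
                return entry[0]  # origin down: stale beats unavailable
            raise
        self.store[key] = (value, time.time())
        return value
```

For assets like images, as the comment notes, a stale copy is almost always good enough.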
By DenisM 8 years ago
That sound you hear is every legacy hosting company firing up its marketing machine
By bandrami 8 years ago
Post about S3 not being a CDN hosted on an S3-powered blog:
But wait. Isn't S3 "the cloud"? Everyone promised the cloud would never go down, ever. It has infinite uptime and reliability.
Well good thing I have my backups on [some service that happens to also use S3 as a backend].
By caravel 8 years ago
Not sure if it's related or not (I'll just assume it is), but Docker Hub is down as well. Haven't been able to push or pull for the last 15 minutes; some other folks are complaining of the same thing.
By agotterer 8 years ago
Hi all. I came across this forum on Google. I have the same error - and it's all a bit beyond me. I'm not a techie or coder but set up Amazon S3 several months ago to backup my websites and it generally works fine - and has saved my bacon on a couple of occasions. (Also back up in Google Drive.)
As someone who's really only a yellow belt (assuming you're all black belts!), just so I understand ('cos I'm cacking myself!) ...
I'm seeing the same issue. Does this mean there's a problem with Amazon? I can't access either of my S3 accounts even if I change the region, and I'm concerned it may be something I've done wrong, and deleted the whole lot. It was working yesterday!!!
Would be massively grateful for a heads up. Thanks in advance.
By robineyre 8 years ago
> Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.
"Believe" is not inspiring.
By flavor8 8 years ago
From https://status.aws.amazon.com/:
"Update at 12:52 AM PST: We are seeing recovery for S3 object retrievals, listing and deletions. We continue to work on recovery for adding new objects to S3 and expect to start seeing improved error rates within the hour."
(I think the AM means PM)
By samaysharma 8 years ago
It looks like the S3 outage is spreading to other systems or the root cause of the S3 problem is affecting different services. There are at least 20 services listed now. [1]
It appears to be impacting gotomeeting, I get this error when trying to start a 12pm meeting here:
CloudFront is currently experiencing problems with requesting objects from Amazon S3.
Edit: ironically, my missed 12pm meeting was an Azure training session.
By vpeters25 8 years ago
Years ago, when we launched our product, I decided to use the US-WEST-2 region as our primary region and to build failover to US-EAST-1 (anyone here remember the outage of 2011? Yeah, that was why).
There is something to be said for not being located in the region where everything gets launched first and where most of the customers are [imo all the benefits of the product, processes and people, but less risk].
Good luck to everyone impacted by this...crappy day.
By verelo 8 years ago
Status Pages (Services & Products affected by S3 outage)
This is why it's important to write code that doesn't depend on only a single service provider. S3 is great. But it's better to set up a Riak cluster on AWS than to actually use S3, if you can.
The only services my team uses directly are EC2 and RDS, and I'm thinking of moving RDS over to EC2 instances.
We are entirely portable. We can move my entire team's infrastructure to a different cloud host really quickly. Our only dependency is a Debian box.
I flipped the switch today and cloned our prod environment, including VPN and security rules, over to a commodity hosting provider.
Change the DNS entry for the services, and we were good to go. We didn't need to do anything because everyone was freaking out about everything else being down. But our internal services were close to unaffected.
At least for my team.
Obviously, we aren't Trello or some of the other big people affected. And we don't have the same needs they do. But setting up the DevOps stuff for my team in the way that I think was correct to begin with (no dependencies other than a Debian box) really shined today. Having a clear and correct deployment strategy on any available hardware platform really worked for us.
Or at least it would have if people weren't so upset about all our other external services being down that they paid no attention to internal services.
Lock-in is bad, mmkay?
If your company is the right size, and it makes sense, do the extra work. It's not that hard to write agnostic scripts that deploy your software, create your database, and build your data from a backup. This can be a big deal when some providers are flipping out.
All-your-junk-in-one-place is really overrated, in my opinion. Be able to rebuild your code and your data at any given point in time. If you don't have that, I don't really know what you have.
By ianamartin 8 years ago
We're in US-West-2 and our ELBs are dropping 5XXs like there's no tomorrow. This is definitely cascading.
By vegasje 8 years ago
Canvas (the educational software platform) is down, and my friends/students are in bad shape now. 'sso.canvaslms.com' returns 504, assume from this S3 outage.
By huac 8 years ago
[deleted]
By 8 years ago
Anyone want to share their real experience with their reliability of Google Cloud Storage.
By etse 8 years ago
Down in US-East-1 as of 17:40 GMT. Amazon SES also down in US-East-1 as of a few minutes later.
Hearing reports of EBS down as well.
By scrollaway 8 years ago
The status page shows a lot of yellow and red now.
From http://status.aws.amazon.com/ Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.
By oshoma 8 years ago
You think this is bad? Just look at what's happening in Sweden...
By FussBudget86 8 years ago
Okay, it's been a few hours and this is starting to get ridiculous. When was the last time that we had a core infrastructure outage this major, that lasted for this long?
By Fej 8 years ago
It really is amazing how many web services are dependent on S3. For instance, the Heroku dashboard is currently down for me. Along with all of my services that are on Heroku.
By kevindong 8 years ago
I am having trouble sending attachments in the Signal app - seems unlikely, but could this be related?
We got timeouts to our bucket address from every location we tried starting at 10:37 Mountain time (GMT-7). Slack uploads started failing, imgur isn't working, and the landing page for the AWS console is showing a 500 error in the image flipper in the middle of the page. The Amazon status page has been all green, but there is a forum post about people having problems at https://forums.aws.amazon.com/thread.jspa?threadID=250319&ts...
In the last couple of minutes that forum post has gone from not existing to 175 views and 9 posts.
Amazon Elastic Compute Cloud (N. Virginia): Increased Error Rates
11:38 AM PST We can confirm increased error rates for the EC2 and EBS APIs and failures for launches of new EC2 instances in the US-EAST-1 Region. We are also experiencing degraded performance of some EBS Volumes in the Region.
Amazon Elastic Load Balancing (N. Virginia): Increased Error Rates
Amazon Relational Database Service (N. Virginia): Increased Error Rates
Amazon Simple Storage Service (US Standard): Increased Error Rates
Auto Scaling (N. Virginia): Increased Error Rates
AWS Lambda (N. Virginia): Increased Error Rates
By rabidonrails 8 years ago
According to the personal health dashboard, they've root-caused the S3 outage and are working to restore.
In the meantime, EC2, ELB, RDS, Lambda, and autoscaling have all been confirmed to be experiencing issues.
By joatmon-snoo 8 years ago
Meanwhile engineers across the globe scramble to fix outages due to AWS s3, $AMZN is unaffected on the stock market. Just shows the disconnect between emotions and reality.
I was listening to sessions from AWS Re:invent last night. What jumped out at me was the claim of 11 9's for S3. How many of those 9's have they blown through with this outage?
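Worth noting: the 11 nines figure is S3's *durability* design target (probability of not losing an object over a year), not availability, so an outage like this burns the availability budget without touching it. A rough sense of scale, assuming the advertised target:

```python
objects = 10_000_000             # say, ten million stored objects
annual_loss_probability = 1e-11  # the advertised 11-nines durability target
expected_lost_per_year = objects * annual_loss_probability
print(expected_lost_per_year)    # on the order of 1e-4 objects lost per year
```

That works out to an expected one lost object per ten thousand years at that fleet size, which is why durability marketing survives an availability incident untouched.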
Experiencing issues with Elastic Beanstalk and Cloudfront as well.
By dyeje 8 years ago
I can't download purchased MP3's from amazon's own site, I get "We’re experiencing a problem with your music download. Please try downloading from Your Orders or contact us."
When I go to my orders I get "There's a problem displaying some of your orders right now.
If you don't see the order you're looking for, try refreshing this page, or click "View order details" for that order."
It seems that Amazon is eating its own dog food.
By jasonl99 8 years ago
I just spent the last hour trying to figure out why in the hell I can't update the function code on a lambda instance. Next time I will remember to check HN first!
By splatcollision 8 years ago
Omg I wish I googled this earlier. Wasted hours debugging :(
By machinarium 8 years ago
There goes my Trello to do list. Now I'm lost. Oh well.
By spacecadets 8 years ago
My ELBs and EB-related instances are also down. I can't even get to Elastic Beanstalk or Load Balancers in the web console. Anyone else having this issue?
By ryanmarr 8 years ago
It doesn't look that bad. Think about it: S3 is such a critical part of almost any web application that it gets treated like a realtime micro-service. So most of the Internet in the U.S. is affected, but nevertheless no one is dead and the world has not ended. So even if, hypothetically, China attacked us with cyber-warfare, it wouldn't be so bad after all... This was kind of like a test.
By soheil 8 years ago
I think this explains why the docker registry is down as well.
"Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue."
One of my heroku apps is down, and I can't log into the heroku dashboard to check it out. I'm guessing this is related.
By learc83 8 years ago
Yes it's down for me. I can't access files stored on S3. Also, the service I run is hung trying to store files on S3.
By vinayan3 8 years ago
My company's ELBs in us-east-1 are experiencing massive amounts of latency causing the instances to be marked unhealthy.
By JBerryMedX 8 years ago
FreshDesk makes extensive use of S3 and it's been unbearably slow to load for the past hour or so. All on S3 requests.
By leesalminen 8 years ago
Hate to ask, but does anybody know of an alternative storage solution? Also, anyone have an alternative to Heroku for now?
By rajangdavis 8 years ago
We're down too with www.paymoapp.com - pretty frustrated that the status page shows everything is up and running.
By janlukacs 8 years ago
This is truly serverless computing at work.
By poofyleek 8 years ago
After a few requests timed out, I started to dig a bit.
The CNAME for a bucket endpoint was pointing to s3-1-w.amazonaws.com with a TTL of at least another 5600 seconds.
Doing a full trace was giving back a new s3-3-w.amazonaws.com.
The IP related to s3-1-w was/is timing out; everything is fine instead for s3-3-w.
By xtus 8 years ago
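The situation described above — a cached CNAME pinning clients to a dead endpoint until the TTL runs out — can be worked around client-side by probing alternate endpoints and using the first one that answers. A minimal sketch in Python; the endpoint names are taken from the comment above, and the probe here is a stub (a real one would open a TCP connection to port 443):

```python
# Endpoints from the comment above: s3-1-w was timing out,
# s3-3-w was answering.
ENDPOINTS = ["s3-1-w.amazonaws.com", "s3-3-w.amazonaws.com"]

def first_reachable(endpoints, probe):
    """Return the first endpoint whose probe succeeds, else None."""
    for host in endpoints:
        try:
            probe(host)
            return host
        except OSError:
            continue
    return None

# Stub probe that mimics the observed behavior; a real probe would
# attempt a connection and raise OSError on timeout.
def demo_probe(host):
    if host == "s3-1-w.amazonaws.com":
        raise OSError("connection timed out")

print(first_reachable(ENDPOINTS, demo_probe))  # s3-3-w.amazonaws.com
```

This is only a sketch of the failover idea, not an endorsement of bypassing DNS; hardcoding edge hostnames like these is fragile since AWS can retire them at any time.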
"We’re continuing to work to remediate the availability issues for Amazon S3 in US-EAST-1. AWS services and customer applications depending on S3 will continue to experience high error rates as we are actively working to remediate the errors in Amazon S3." Last Update 1:54pmEST
It shows up in the event log now too.
By knaik94 8 years ago
I'm running into timeouts trying to download elixir packages, and I'm willing to bet this is the cause
Same here. I can log in to the new S3 console UI, but all of my buckets/resources are missing. Same error as you in the old UI. Also unable to connect through the AWS CLI (says, "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied"). Fun.
all of your jokes about the dashboard not turning red b/c the icon is hosted on US EAST are true:
Amazon Web Services (@awscloud), 8 minutes ago:
The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates.
By eggie5 8 years ago
AWS is claiming that Simple Storage (US Standard) is starting to come back up as of 12:54 PM PST.
By Animats 8 years ago
Where is that "Show HN" that will let me check if a site is affected by an S3 outage?
By adamveld12 8 years ago
Their status page images are hosted on S3, so will be a while for the green checkmarks to update
By pfela 8 years ago
Looks like the dashboard has been updated to no longer use S3:
AWS is having a major meltdown right now
"We have now repaired the ability to update the service health dashboard. " - full of yellow red icons now indeed https://status.aws.amazon.com/
By tudorconstantin 8 years ago
The AWS status page is still showing all green but now has a header saying they are investigating increased error rates. https://status.aws.amazon.com/
By linsomniac 8 years ago
It appears Docker Hub is hosted on S3 as well, none of the official images can be pulled.
By tzaman 8 years ago
I was in the middle of thinking about moving off AWS to a dedicated provider, as our bill has increased a lot with scale. The only thing holding me back was the uptime confidence. Now I feel it's not a bad idea.
By ruchit47 8 years ago
I get this in my aws console.
Increased API Error Rates
09:52 AM PST We are investigating increased error rates in the US-EAST-1 Region.
Event data
Event: S3 operational issue
Status: Open
Region/AZ: us-east-1
Start time: February 28, 2017 at 6:51:57 PM UTC+1
End time: -
Event category: Issue
Based on reports from the field, it looks like S3 was down for about three hours for most of their customers.
S3 promises four nines of availability (and 11 nines of durability), so today we got about 3-4 years' worth of downtime budget in one fell swoop. Oops.
By metafunctor 8 years ago
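The arithmetic above checks out: four nines of availability allows roughly 52 minutes of downtime per year, so a ~3-hour outage burns through several years of that budget. A quick sanity check:

```python
# Four nines of availability: allowed downtime fraction is 0.01%.
minutes_per_year = 365 * 24 * 60            # 525,600
budget_per_year = minutes_per_year * (1 - 0.9999)
outage_minutes = 3 * 60                     # the ~3-hour outage
years_of_budget = outage_minutes / budget_per_year

print(round(budget_per_year, 1))   # ~52.6 minutes/year
print(round(years_of_budget, 1))   # ~3.4 years of budget consumed
```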
We are starting to see recoveries, our SES emails have mostly gone out and our data synchronization has updated 2 of our 3 feeds. Amazon has posted a message that they expect "improved error rates" in the next 45 minutes.
Does anyone have trouble with the Cloud Console? The JS assets for the CloudFront dashboards seem broken, so unfortunately it’s not possible to change the behaviours of the Distributions (e.g. to point them to another bucket)
By jotaen 8 years ago
So much has broken thanks to this. Web apps, slack uploads, parts of Freshdesk etc. I don't love you right now AWS.
I never understood why so many devs flocked to AWS. I actually find their abstraction of services gets in the way and slows down my dev instead of making it easier like so many devs claim it does. I prefer Linode.
By fjabre 8 years ago
One of a really rare times when it's good to be in Europe (s3 works here).
By samat 8 years ago
Interestingly, I placed an order on amazon.com and while the order appears when I look at my account, none of the usual automated emails have come. I wonder how deeply this is affecting their retail customers.
By tjpaudio 8 years ago
[deleted]
By 8 years ago
Down from the outside; the internal APIs (accessed from within EC2) still work.
All of our S3 assets are unavailable. Cloudfront is accessible but returning a 504 status with the message: "CloudFront is currently experiencing problems with requesting objects from Amazon S3."
By jhaile 8 years ago
Our IBM Cloud (SoftLayer) provides a secure and stable cloud environment with a private network, for bare-metal, dedicated, private, and public cloud. Leave a comment if you want to learn more. Also HIPAA-ready.
By ibmcloud 8 years ago
Here we go again:
Technology leads to technology (and wealth) monopolies, in other words: more centralization. Which has always been bad.
Just like with Cloudflare leaking highly sensitive data all over the Internet, a couple of days ago.
By benevol 8 years ago
Yeah, we host on S3 (US-East-1 I think) with Cloudfront for caching / SSL. Some of our requests get through but it's been intermittent. Lots of 504 Gateway Time-Outs when retrieving CSS, JS.
By andrewfong 8 years ago
[deleted]
By 8 years ago
Totally fucked.
By meddlepal 8 years ago
I think there were some Font Awesome loading issues related to this. I also noticed a site trying to load Twitter messages that couldn't get the JavaScript loaded during that time today.
My EB instances and Load Balancers are also down. I can't even get to load balancers in ec2 web console or to elastic beanstalk in web console. It's been almost an hour now.
By ryanmarr 8 years ago
As of 4:30PM Pacific, we're still having trouble with EC2 autoscaling API operations in US-East-1. Basically very long delays in launching new instances or terminating old ones.
DockerHub is down as well. DockerHub was down in Oct 2015 because S3 was down in US-EAST. They should have known to cache images in multiple S3 regions since then.
By garindra 8 years ago
Can anyone comment on mitigating issues like this with S3 Cross-region replication? I'm reading up on it now while one of my services is dead in the water.
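Cross-region replication keeps an asynchronous copy of each object in a bucket in a second region; the read path can then fall back to the replica when the primary region errors. A minimal sketch of the read-side fallback — the bucket names are hypothetical, and `fetch(bucket, key)` stands in for an S3 GET (e.g. boto3's `get_object`):

```python
def read_with_fallback(key, fetch,
                       primary="my-bucket-us-east-1",
                       replica="my-bucket-us-west-2"):
    """Try the primary bucket first; on any error, read the replica.

    Note: replication is asynchronous, so the replica may lag the
    primary by seconds to minutes — recently written objects can be
    missing from it.
    """
    try:
        return fetch(primary, key)
    except Exception:
        return fetch(replica, key)

# Demo with a stub fetch that simulates the us-east-1 outage.
def stub_fetch(bucket, key):
    if bucket == "my-bucket-us-east-1":
        raise RuntimeError("503 Service Unavailable")
    return f"{bucket}/{key}"

print(read_with_fallback("logo.png", stub_fetch))  # my-bucket-us-west-2/logo.png
```

Replication only helps if the application (or its CDN origin configuration) actually knows how to read from the second bucket; a replica nobody fails over to is just a backup.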
wow even services like Intercom are affected, I can't see who is on my website right now.
By soheil 8 years ago
yeah still all green in AWS status.... maybe their red and yellow icons are kept on S3. :-)))
By sweddle 8 years ago
Now what kind of business chooses to remain down for 2-plus hours during peak business hours?
Seems cloud computing still has a lot to learn.
By sk2code 8 years ago
[deleted]
By 8 years ago
Is this only affecting US-EAST-1?
By rhelsing 8 years ago
eu-west-1 is doing great. Obviously European ops are superior to their US counterparts.
By nvarsj 8 years ago
Same here. We are seeing issues.
By bseabra 8 years ago
Same here. US East (N. Virginia)
By ARolek 8 years ago
We all laughed at the notion of moon people dropping rocks on the earth.
Then they started dropping rocks on S3, and who is laughing now?
By mvindahl 8 years ago
In the S3 web interface, requests to the S3 backend end with 503 Service Unavailable.
By cryreduce 8 years ago
Works for me, in us-west-2.
By Beacon11 8 years ago
Same here
By the_arun 8 years ago
Is it just us-east-1? Could it be prevented by using a different region?
By thepumpkin1979 8 years ago
[deleted]
By 8 years ago
Dead as a doornail for me
By 65827 8 years ago
Yea seeing the same thing
By danielmorozoff 8 years ago
news.ycombinator.com seems really slow right now. s3 dependencies?
By SubiculumCode 8 years ago
[deleted]
By 8 years ago
My website is not down.
By Eyes 8 years ago
Yup - dead in the water
By jgacook 8 years ago
Seeing it here as well.
By baconomatic 8 years ago
Cmon but the cloud is magic and very reliable let's move everything to the cloud
By qaq 8 years ago
quite ironic that 'isitdown.com' is also down
By mrep 8 years ago
Netflix is up. Enjoy
By AzzieElbab 8 years ago
Same here in US EAST
By Raphmedia 8 years ago
Seeing the same here
By xvolter 8 years ago
Outage as a Service
By julenx 8 years ago
The same here still
By ahmetcetin 8 years ago
Yes, appears to be.
By dbg31415 8 years ago
Same problem bro...
By TheVip 8 years ago
Quora is down too.
By prab97 8 years ago
Is it down again?
By sonnyhe2002 8 years ago
It's down :(
By jsanroman 8 years ago
SES is also down
By 0xCMP 8 years ago
[deleted]
By 8 years ago
Yep, same here.
By mtdewulf 8 years ago
is this affecting dockerhub for anyone?
By eggie5 8 years ago
same here, east us seems non-responsive
By jahrichie 8 years ago
The same here
By ahmetcetin 8 years ago
Down for me.
By methurston 8 years ago
Anyone get more info from AWS?
By kangman 8 years ago
what's the SLA for s3?
By kangman 8 years ago
Yes
By aarondf 8 years ago
Same
By davidsawyer 8 years ago
Yes.
By GabeIsman 8 years ago
region-west2 is also down
By dhairya 8 years ago
heroku API is down for me
By thadjo 8 years ago
So why did the outage occur?
By kfkhalili 8 years ago
Azure is also down. Related?
By davidcollantes 8 years ago
getting the same...
By renzy 8 years ago
yes, confirmed.
By eggie5 8 years ago
yes it is.
By simook 8 years ago
yup
By b01t 8 years ago
[deleted]
By 8 years ago
Increased Error Rates
Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.
Amazon hosted their status page on their failing service, ouch. Now they fixed the status page, after more than one hour.
The dashboard not changing color is related to S3 issue.
See the banner at the top of the dashboard for updates.
So this is particularly weird - one of my instances was showing 0% CPU in CloudWatch (dropped from 60% at the start of the event), but the logs were saying 'load 500'. I ssh'd in... and the problem resolved itself. The only thing I did was run htop to look at the load, and it dropped from 500 (reported in htop) to its normal level. Just ssh'ing in fixed that issue.
By vacri 8 years ago
Sorry, my simplistic mind is only thinking this right now:
Getting issues with the Citrix ShareFile API (which I've suspected runs on S3). Seems to only be impacting writes, based on a preliminary assessment.
By jefe_ 8 years ago
[deleted]
By 8 years ago
[deleted]
By 8 years ago
[deleted]
By 8 years ago
Is the CLI working for anyone else? I can't use the console UI, but aws s3 ls and get commands seem to be working fine.
By 4wmturner 8 years ago
Looks like SoundCloud is hosting its tracks on S3; can't program without my music...
By myth_drannon 8 years ago
[deleted]
By 8 years ago
Well this took out Quay and CircleCI! Hopefully this gets resolved ASAP.
By chadscira 8 years ago
EVERYBODY PANIC! US-EAST-1 is what we use, down for us.
By nanistheonlyist 8 years ago
Dropbox is down as well. This is going to be gud.
By sz4kerto 8 years ago
[deleted]
By 8 years ago
Is it down again?
By sonnyhe2002 8 years ago
SoundCloud uses AWS S3. It is still down.
By thenewregiment2 8 years ago
back up
By la6470 8 years ago
are you openly admitting that the AWS service status page runs on AWS? because that is far more embarrassing than this downtime ever could be
By fletom 8 years ago
A mass outage like this is exactly one of the things we are looking to avoid by building a decentralized storage grid with Sia.
Sia is immune to situations like this because data is stored redundantly across dozens of servers around the world, all running on different, unique configurations. Furthermore, there's no single central point of control on the Sia network.
Sia is still under heavy development, but its future feature set and specifications should be able to fully replace the S3 service (including CDN capabilities).
I like how you know this comment is in poor taste, and posted it anyways.
By ocdtrekkie 8 years ago
fuck the police
By tommy1212 8 years ago
I try not to put all my eggs in one basket, that's why for images I use imgur. They have a great API and it's 100% free. There is a handy ruby gem [1] which takes a user uploaded image and sticks it on imgur and returns its URL with dimensions etc. On top of that you don't have to pay for traffic to those assets.
By boulos 8 years ago