The subtle art of e-triage: summary

...the number of users rises, and/or the size of the network rises, and/or the load on the system increases, and/or the amounts of data it manages increase.

Scalability is Hard!
- In small-scale settings, conditions are easily controlled
- We don't tend to see failures and recoveries
- Things that can fail include computers and the software on them, network links, routers...
- We are not likely to come under attack

Fundamental Issues of Scale
- Suppose a machine can do x business transactions per second
- If I double the load to 2x, how big and fast a machine should I buy?
- With computers, the answer isn't obvious!
- If the answer is "twice as big", we say the problem scales "linearly"
- Often the answer is "4 times as big" or worse! Such problems scale poorly, perhaps even exponentially!
- Basic insight: "bigger" is often much harder!

Does the Internet "Scale"?
- It works pretty well, most of the time
- But if you look closely, it has outages very frequently
- Butler Lampson, who won the Turing Award, observed (to paraphrase): computer scientists didn't invent the World Wide Web, because they are only interested in building things that work really well. The Web, of course, is notoriously unreliable. But the insight we, as computer scientists, often miss is that the Web doesn't need to work well!
- A "reliable web": an example of an oxymoron?
- The Internet scales, but has low reliability

How do technologies scale?
- One of the most critical issues we face!
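The point about "twice the load" not meaning "twice the machine" can be made concrete with a small sketch. The cost functions below are made up for illustration; they model a problem that scales linearly versus one whose resource cost grows with the square of the load.

```python
# Hypothetical cost models illustrating linear vs. superlinear scaling.
def linear_cost(load, base=1.0):
    """Linear scaling: double the load, buy a machine twice as big."""
    return base * load

def quadratic_cost(load, base=1.0):
    """Quadratic scaling: double the load, and you need a machine 4x as big."""
    return base * load ** 2

for load in (1, 2, 4):
    print(f"load={load}x  linear cost={linear_cost(load):.0f}  "
          f"quadratic cost={quadratic_cost(load):.0f}")
```

Running this shows the gap widening quickly: at 4x the load, the quadratically scaling problem needs 16x the resources, which is why "bigger" is often much harder.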
The bottom line is that, on the whole:
- Very few technologies scale well
- The ones that do tend to have poor reliability and security properties
- Scale introduces major forms of complexity
- And large systems tend to be unstable, hard to administer, and "fragile" under stress

Web scaling issues
- A very serious problem for popular sites
- Most solutions work like this: your site "offloads" data to a web-hosting company (examples are Akamai or Exodus), which replicates your pages at many sites worldwide; ideally, your customers see better performance
- Second approach: Digital Island focuses on giving better connections to the Internet backbone, avoiding ISP congestion...

Akamai Approach
- They cut deals with lots of ISPs: "Give us room in your machine room; in fact, you should pay us for this!"
- "We'll put our server there, and it will handle so much web traffic that your lines will be less loaded, since nothing will need to go out to the backbone. And this will save you big bucks!"

A Good Idea?
- The Akamai approach focuses on "rarely changing" data
- Example: pictures used on your web pages
- Non-example: the pages themselves, which are often constructed in a customized way
- Pre-Akamai: your web site handles all the traffic for constructing pages, and also for handing out the pictures and static stuff
- Post-Akamai: you hand out the main pages, but the URLs for pictures point to Akamai web servers

Pre- and Post-Akamai
- Pre-Akamai, the pages fetched by the browser are a mass of URLs, and these point to things like pictures and ads stored at...
- So to open a page, the user sends a request, fetches an index page, then...
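The post-Akamai arrangement amounts to rewriting the static-asset URLs in your pages so that they point at the CDN's servers, while the dynamically constructed pages themselves keep coming from your site. A minimal sketch of that rewriting step, using a hypothetical CDN hostname and regular-expression matching (a real deployment would use a proper HTML parser and the CDN's own tooling):

```python
import re

# Hypothetical CDN hostname; a real Akamai-style setup assigns its own.
CDN_HOST = "https://cdn.example-cdn.net"

def rewrite_static_urls(html: str, origin: str) -> str:
    """Point src= asset URLs (pictures, static files) at the CDN,
    leaving href= links to dynamic pages untouched."""
    pattern = re.compile(rf'src="{re.escape(origin)}(/[^"]+)"')
    return pattern.sub(lambda m: f'src="{CDN_HOST}{m.group(1)}"', html)

page = ('<img src="https://www.example.com/images/logo.png">'
        '<a href="https://www.example.com/cart">Cart</a>')
print(rewrite_static_urls(page, "https://www.example.com"))
```

After rewriting, the browser fetches the image from the CDN's nearby replica, so only the request for the page itself travels to the origin site.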