Architecture of a Web server accelerator

Cited 7 times in Web of Science · Cited 12 times in Scopus
  • Hits: 381
  • Downloads: 2
DC Field | Value | Language
dc.contributor.author | Song, Junehwa | ko
dc.contributor.author | Iyengar, A | ko
dc.contributor.author | Levy-Abegnoli, E | ko
dc.contributor.author | Dias, D | ko
dc.date.accessioned | 2009-11-27T06:57:00Z | -
dc.date.available | 2009-11-27T06:57:00Z | -
dc.date.created | 2012-02-06 | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2002-01 | -
dc.identifier.citation | COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING, v.38, no.1, pp.75 - 97 | -
dc.identifier.issn | 1389-1286 | -
dc.identifier.uri | http://hdl.handle.net/10203/13580 | -
dc.description.abstract | We describe the design, implementation and performance of a high-performance Web server accelerator which runs on an embedded operating system and improves Web server performance by caching data. It can serve Web data at rates an order of magnitude higher than that which would be achieved by a high-performance Web server running on similar hardware under a conventional operating system such as Unix or NT. The superior performance of our system results in part from its highly optimized communications stack. In order to maximize hit rates and maintain updated caches, our accelerator provides an API which allows application programs to explicitly add, delete, and update cached data. The API allows our accelerator to cache dynamic as well as static data. We describe how our accelerator can be scaled to multiple processors to increase performance and availability. The basic design alternatives include a content router or a TCP router (without content routing) in front of a set of Web cache accelerator nodes, with the cache memory distributed across the accelerator nodes. Content-based routing reduces cache node CPU cycles but can make the front-end router a bottleneck. With the TCP router, a request for a cached object may initially be sent to the wrong cache node; this results in larger cache node CPU cycles, but can provide a higher aggregate throughput, because the TCP router becomes a bottleneck at a higher throughput than the content router. We quantify the throughput ranges in which different designs are preferable. We also examine a combination of content-based and TCP routing techniques. In addition, we present statistics from critical deployments of our accelerator for improving performance at highly accessed Sporting and Event Web sites hosted by IBM. (C) 2002 Elsevier Science B.V. All rights reserved. | -
dc.language | English | -
dc.language.iso | en_US | en
dc.publisher | ELSEVIER SCIENCE BV | -
dc.title | Architecture of a Web server accelerator | -
dc.type | Article | -
dc.identifier.wosid | 000173165400005 | -
dc.identifier.scopusid | 2-s2.0-0037107716 | -
dc.type.rims | ART | -
dc.citation.volume | 38 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 75 | -
dc.citation.endingpage | 97 | -
dc.citation.publicationname | COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING | -
dc.identifier.doi | 10.1016/S1389-1286(01)00241-9 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Song, Junehwa | -
dc.contributor.nonIdAuthor | Iyengar, A | -
dc.contributor.nonIdAuthor | Levy-Abegnoli, E | -
dc.contributor.nonIdAuthor | Dias, D | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Web server acceleration | -
dc.subject.keywordAuthor | reverse proxy caches | -
dc.subject.keywordAuthor | Web caching | -
dc.subject.keywordAuthor | Web performance | -
dc.subject.keywordAuthor | content-based routing | -
dc.subject.keywordAuthor | load balancing | -
dc.subject.keywordAuthor | TCP routing | -
dc.subject.keywordAuthor | connection hand-off | -
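The abstract above describes two ideas that are easy to misread without a concrete picture: an API through which application programs explicitly add, delete, and update cached objects (so dynamic as well as static pages can be cached), and a front end that can route each request by its content so that every object is owned by one cache node. The Go sketch below is only an illustration of those two ideas; the type and function names (Cache, Add, Delete, Update, pickNode) are assumptions for this example, not the accelerator's actual interface, which the paper implements on an embedded operating system with an optimized communications stack.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// Cache sketches an explicitly managed object cache: the application adds,
// deletes, and updates entries itself instead of relying on expiration,
// which is what lets dynamic content be cached safely.
type Cache struct {
	mu      sync.RWMutex
	objects map[string][]byte
}

func NewCache() *Cache {
	return &Cache{objects: make(map[string][]byte)}
}

// Add caches (or overwrites) the response body for a URL.
func (c *Cache) Add(url string, body []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.objects[url] = body
}

// Delete removes a URL, e.g. after the underlying data has changed.
func (c *Cache) Delete(url string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.objects, url)
}

// Update replaces stale content with a freshly generated body.
func (c *Cache) Update(url string, body []byte) {
	c.Add(url, body)
}

// Get returns the cached body and whether the lookup was a hit.
func (c *Cache) Get(url string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	body, ok := c.objects[url]
	return body, ok
}

// pickNode illustrates one way a content router could assign objects to
// cache nodes: hash the requested URL so each object lives on exactly one
// of n nodes. (Hashing is an assumption here; the abstract does not say
// how the accelerator's content router maps objects to nodes.)
func pickNode(url string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(url))
	return int(h.Sum32()) % n
}

func main() {
	cache := NewCache()
	cache.Add("/scores/live", []byte("<html>3 - 1</html>"))

	if body, ok := cache.Get("/scores/live"); ok {
		fmt.Printf("hit: %s\n", body)
	}

	// A content-based router would forward /scores/live to this node.
	fmt.Println("owning node:", pickNode("/scores/live", 4))
}
```

The contrast drawn in the abstract maps onto this sketch as follows: with content-based routing the front end does the pickNode-style work itself, so requests always reach the owning node but the router spends more cycles per request; with plain TCP routing the front end skips that step, a request may land on a non-owning node, and the system relies on a hand-off between cache nodes, trading extra cache-node CPU for a front end that saturates later.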
Appears in Collection
CS-Journal Papers(저널논문)