As people increasingly rely on the Internet for business, personal finance, and investment, Internet fraud becomes a greater and greater threat. Internet fraud takes many forms, from phony items offered for sale on eBay, to scurrilous rumors that manipulate stock prices, to scams that promise great riches if you will help funnel a foreign financial transaction through your own bank account.
One interesting and fast-growing species of Internet fraud is phishing. Phishing attacks use email messages and web sites designed to look as if they come from a known and legitimate organization, in order to deceive users into disclosing personal, financial, or computer account information. The attacker can then use this information for criminal purposes, such as identity theft, larceny, or fraud. Users are tricked into disclosing their information either by providing it through a web form or by downloading and installing hostile software.
A phishing attack succeeds when a user is tricked into forming an inaccurate mental model of an online interaction and thus takes actions that have effects contrary to the user's intentions. Because inferring a user's intentions can be difficult, building an automated system to protect users from phishing attacks is a challenging problem.
Phishing attacks are rapidly increasing in frequency, and many are good enough to fool users. According to the Anti-Phishing Working Group (APWG), reports of phishing attacks increased by 180% in April 2004 alone, and by 4,000% in the six months prior to April. A recent study by the antispam firm MailFrontier Inc. found that phishing emails fooled users 28% of the time. Estimates of losses resulting from phishing approached $37 million in 2002.
The Anti-Phishing Working Group collects and archives examples of phishing attacks, a valuable service because the web site used in an attack exists only for a short time. One example on APWG is an attack against eBay customers, first reported on March 9, 2004. 
The attack begins when the potential victim receives an email (Figure 1), purporting to be from eBay, that claims that the user's account information is invalid and must be corrected. The email contains an embedded hyperlink that appears to point to a page on eBay's web site. This web page asks for the user's credit card number, contact information, Social Security number, and eBay username and password (Figure 2).
Beneath the surface, however, neither the email message nor the web page is what it appears to be. Figure 3 breaks the deception down schematically. The phishing email resembles a legitimate email from eBay. Its source (listed in the "From:" header) appears to be S-Harbor@eBay.com, which refers to the legitimate domain name for eBay Inc. The link embedded in the message also appears to go to eBay.com, even using an encrypted channel ("https:"). Based on these presentation cues and the content of the message, the user forms a mental model of the message: eBay is requesting updated information. The user then performs an action, clicking on the embedded hyperlink, which is presumed to go to eBay. But the user's action is translated into a completely different system operation—namely, retrieving a web page from IP address 184.108.40.206, a server from a communication company registered in Seoul, South Korea. This company has no relationship with eBay Inc.
The phishing web site follows a similar pattern of deception. The page looks like a legitimate eBay web page. It contains an eBay logo, and its content and layout match the format of pages from the actual eBay web site. Based on this presentation, the user forms a
Figure 1. Screenshot of a phishing email (source: Anti-Phishing Working Group)
mental model that the browser is showing the eBay web site and that the requested information must be provided in order to keep the user's eBay account active. The user then performs an action, typing in personal and financial data and clicking the Submit button, with the intention of sending this information to eBay. This action is translated by the web browser into a system operation, encoding the entered data into an HTTP request sent to 220.127.116.11, which is not a legitimate eBay server.
Bruce Schneier has observed that methods for attacking computer networks can be categorized in waves of increasing sophistication and abstraction. According to Schneier, the first wave of attacks was physical in nature, targeting the computers, the network devices, and the wires between them, in order to disrupt the flow of information. The second wave consisted of syntactic attacks, which target vulnerabilities in network protocols, encryption algorithms, or software implementations. Syntactic attacks have been a primary concern
Figure 2. Screenshot of a phishing web page pointed to by the phishing email (source: Anti-Phishing Working Group)
Figure 3. Anatomy of a phishing attack
of security research for the last decade. The third wave is semantic: "attacks that target the way we, as humans, assign meaning to content." 
Phishing is a semantic attack. Successful phishing depends on a discrepancy between the way a user perceives a communication, like an email message or a web page, and the actual effect of the communication. Figure 4 shows the structure of a typical Internet communication, dividing it into two parts. The system model is concerned with how computers exchange bits—protocols, representations, and software. When human users play a role in the communication, however, understanding and protecting the system model is not enough, because the real message communicated depends not on the bits exchanged but on the semantic meanings that are derived from the bits. This semantic layer is the user's mental model. The effectiveness of phishing indicates that human users do not always assign the proper semantic meaning to their online interactions.
Figure 4. Human-Internet communication
When a user faces a phishing attack, the user's mental model of the interaction disagrees with the system model. For example, the user's intention may be "go to eBay," but the actual implementation of the hyperlink may be "go to a server in South Korea." It is this discrepancy that enables the attack, and it is this discrepancy that makes phishing attacks so hard to defend against. Users derive their mental models of the interaction from its presentation—the way it appears on the screen. The implementation details of web pages and email messages are hidden and generally inaccessible to most users. Thus, the user is in no position to compare his mental model with the system model, at least not without considerable extra effort. Email clients and web browsers, on the other hand, follow the coded instructions provided to them in the message, but are unable to check the user's intentions. Without awareness of both models, neither the user nor the computer can detect the discrepancy introduced by phishing.
One extreme solution to the phishing problem would simply discard the presentation part of an Internet communication—the part that produces the user's mental model—because it cannot be trusted. Instead, a new presentation would be generated directly from the implementation. If the user's computer is trustworthy, then the presentation seen by the user would be guaranteed to correspond to the actual implementation. Unfortunately, the cost of this idea in both usability and functionality would be enormous. Most online messages are legitimate, after all, with the presentation correctly reflecting the implementation. Phishing messages are rare (but pernicious) exceptions. This solution would therefore sacrifice the freedom of legitimate senders to present and brand themselves in order to block a small number of wrongdoers.
So we must accept the fact that users will see messages with mismatched presentation and implementation. Attempts to fight phishing computationally, which are discussed in this chapter, try to enable the computer to bridge the gap between the user's mental model and the true system model. But the human user must be the final decision-maker about whether a message is phishing. The reason is that phishing targets how users assign semantic meaning to their online interactions, and this assignment process is outside the system's control.
Phishing attacks use a variety of techniques to make the presentation of an email message or web page deceptively different from its implementation. In this section, we catalog a few of the techniques that have been seen in the wild:
Another way that users authenticate web sites is by examining the URL displayed in the address bar. To deceive this indicator, the attacker may register a domain name that bears a superficial similarity to the imitated site's domain. Sometimes a variation in capitalization or use of special characters is effective. Because most browsers display the URL in a sans-serif font, paypaI.com has been used to spoof paypal.com, and barcIays.com to spoof barclays.com. More commonly, however, the fake domain name simply embeds some part of the real domain: ebay-members-security.com to spoof ebay.com, and users-paypal.com to spoof paypal. Most users lack the tools and knowledge to investigate whether the fake domain name is really owned by the company being spoofed.
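The similar-domain heuristics just described can be approximated in code. The sketch below is a minimal illustration, not part of any deployed tool: the confusable-character map, brand list, and thresholds are all our assumptions.

```python
# Sketch: flag domains that are a character-swap or brand-embedding away
# from a well-known site. Brand list and confusables are illustrative.
CONFUSABLES = str.maketrans({"I": "l", "1": "l", "0": "o"})
KNOWN_BRANDS = ["paypal.com", "ebay.com", "barclays.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_like_brand(domain: str) -> bool:
    raw = domain.lower()
    normalized = domain.translate(CONFUSABLES).lower()
    for brand in KNOWN_BRANDS:
        if raw == brand:
            return False          # the genuine domain itself
        if edit_distance(normalized, brand) <= 1:
            return True           # paypaI.com, barcIays.com
        if brand.split(".")[0] in normalized:
            return True           # ebay-members-security.com, users-paypal.com
    return False
```

A real tool would need a far larger confusable table (including Unicode homographs) and a proper registry of brand domains, but the basic checks are this simple.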
Another way to spoof the URL took advantage of a little-used feature in URL syntax. A username and password could be included before the domain name, using the syntax http://username:password@domain/. Attackers could put a reasonable-looking domain name in the username field, and obscure the real domain amid noise or scroll it past the end of the address bar (e.g., http://earthlink.net%6C%6C...%6C@18.104.22.168). Recent updates to web browsers have closed this loophole, either by removing the username and password from the URL before displaying it in the address bar or (in the case of Internet Explorer) by simply forbidding the username/password URL syntax entirely.
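Python's standard URL parser makes this deception explicit: the host a browser connects to is whatever follows the "@", and everything before it is mere username text. The URL below is a hypothetical example in the style described above, not one from the APWG archive.

```python
from urllib.parse import urlsplit

# Hypothetical URL built on the username trick described above.
url = "http://www.ebay.com@10.19.32.4/account/update"
parts = urlsplit(url)

print(parts.username)   # "www.ebay.com" -- the decoy a hasty reader sees
print(parts.hostname)   # "10.19.32.4"   -- where the browser actually connects
```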
The simplest expedient for obscuring a server's identity is to display it as an IP address, such as http://22.214.171.124. This technique is surprisingly effective. Because many legitimate URLs are already filled with opaque and incomprehensible numbers, only a user knowledgeable enough to parse a URL, and alert enough to actually do so, is likely to be suspicious.
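A retrieval-time check for this technique is straightforward, since the standard library can tell a numeric address from a domain name. This is a sketch; the example addresses in the test are documentation IPs, not ones from real attacks.

```python
import ipaddress
from urllib.parse import urlsplit

def host_is_bare_ip(url: str) -> bool:
    """True when the URL's host is a raw IP address rather than a name."""
    host = urlsplit(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False
```

As the text notes, flagging such URLs invites false positives, since some legitimate services also serve content from bare IP addresses.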
A recent attack against Citibank customers has taken page copying a step further, by displaying the true Citibank web site in the browser but popping up an undecorated window on top to request the user's personal information.
Phishing attacks also use nontechnical approaches to persuade users to fall for the attack. One tactic is to create urgency, so that the user feels rushed to comply and is less likely to take time to check the message's authenticity. Another tactic is to threaten dire consequences if the user fails to comply, such as terminating service or closing accounts. A few attacks promise big rewards instead ("You've won a great prize!"), but threatening attacks are far more common, perhaps because human nature makes users more suspicious of getting something for nothing.
Phishing attacks to date have several other noteworthy properties:
Most phishing web sites exist for a very short period of time, on the order of days or even hours.
Many phishing messages have misspellings, grammar errors, or confusing wording.
As we showed earlier in the example of the eBay attack, we can separate an online interaction into four steps (Figure 5):
Message retrieval. An email message or web page arrives at the user's personal computer from the Internet.
Presentation. The message is displayed in the user interface, the user perceives it, and the user forms a mental model.
Action. Guided by the mental model, the user performs an action in the user interface, such as clicking a link or filling in a form.
System operation. The user's action is translated into system operations, such as connecting to a web server and submitting data.
In this section, we survey existing defenses against phishing attacks, classifying them according to which of these four steps they address.
Figure 5. Four steps of human-Internet interaction
In an ideal world, the best defense against phishing would simply block all phishing communications from being shown to the user, by filtering them at message retrieval time. The essential requirement for this solution is that the computer alone must be able to accurately differentiate phishing messages from legitimate ones. Defenses that filter at message retrieval depend on message properties that are easily understood by a computer.
One of these properties is the identity of the sender. Black listing is widely used to block potentially dangerous or unwelcome messages, such as spam. If the sender's IP address is found in a black list, the incoming message can be categorized as spam or even simply rejected without informing the user. A black list may be managed by an individual user, the approach taken by Internet Explorer's Content Advisor (Figure 6). Alternatively, it may be managed by an organization or by collaboration among many users. For phishing, the EarthLink Toolbar alerts the user about web pages that are found on a black list of known fraudulent sites. 
Figure 6. Internet Explorer's Content Advisor
Black listing is unlikely to be an effective defense on today's Internet, because it is so easy to generate new identities such as new email addresses and new domain names. Even new IP addresses are cheap and easy to obtain. The black list must be updated constantly to warn users about dangerous messages from newly created sources. Because phishing sites exist for only a short time, the black list must be updated within hours or minutes in order to be effective at blocking the attack.
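Mechanically, the retrieval-time check itself is trivial, as the sketch below shows (the addresses are illustrative documentation IPs, not a real block list); the entire difficulty lies in keeping the list current on the timescale just described.

```python
# Minimal sketch of retrieval-time black listing. In practice BLACK_LIST
# would be fetched and refreshed continuously from a shared authority.
BLACK_LIST = {"192.0.2.17", "198.51.100.99"}   # illustrative sender IPs

def classify_sender(sender_ip: str) -> str:
    """Reject mail from listed sources; deliver everything else."""
    return "reject" if sender_ip in BLACK_LIST else "deliver"
```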
The converse of black listing is white listing, allowing users to see messages only from a list of acceptable sources. For example, Secure Browser controls where users may browse on the Internet using a list of permitted URLs.  White listing avoids the new-identity problem because newly created sources are initially marked as unacceptable. But defining the white list is a serious problem. Because it is impossible to predict where a user might need to browse, a predefined, fixed white list invariably blocks users from accessing legitimate web sites. On the other hand, a dynamic white list that needs the user's involvement puts a burden on users because, for every site they want to visit, they must first decide whether to put it in the white list. This also creates vulnerability: if a phishing site can convince users to submit sensitive data to it, it may also be able to convince them to put it into a white list.
Another property amenable to message filtering is the textual content of the message. This kind of content analysis is used widely in antispam and antivirus solutions. Dangerous messages are detected by searching for well-known patterns, such as spam keywords and virus code signatures. In order to beat content analysis, an attacker can tweak the content to bypass the well-known filtering rules. For example, encryption and compression are added to existing viruses in order to bypass antivirus scans.  Random characters are inserted into spam emails to enable them to bypass spam filters. One sophisticated phishing attack used images to display text messages so that they would defeat content analysis. 
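A toy version of such a content filter, and the evasion it invites, can be sketched in a few lines. The keyword patterns here are illustrative assumptions, not drawn from any real filter.

```python
import re

# Toy keyword filter of the kind described above.
SUSPICIOUS = [r"verify your account", r"social security number", r"suspended"]

def content_score(text: str) -> int:
    """Count how many suspicious patterns appear in the message."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in SUSPICIOUS)

content_score("Please verify your account or it will be suspended")  # 2
# Evasion: character substitutions defeat exact patterns, as the text notes.
content_score("Please ver1fy y0ur acc0unt")                          # 0
```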
Spam filtering is one defense that applies at message retrieval time. Because nearly all phishing attacks are currently launched by spam, getting spam under control may reduce the risk of phishing attacks significantly. Unfortunately, the techniques used by many spam filters, which scan for keywords in the message content to distinguish spam from legitimate mail, are insufficient for classifying phishing attacks, because phishing messages are designed expressly to mimic legitimate mail from organizations with which the user already has a relationship. Even if spam filters manage to reduce the spam problem substantially, we can anticipate that phishing will move to other transmission vectors, such as anonymous comments on discussion web sites, or narrowly targeted email attacks rather than broadcast spam.
When a message is presented to the user, in either an email client or a web browser, the user interface can provide visual cues to help the user decide whether the message is legitimate.
Current web browsers reflect information about the source and integrity of a web page through a set of visual cues. For example, the address bar at the top of the window displays the URL of the retrieved web page. A lock icon, typically found in the status bar, indicates whether the page was retrieved through an encrypted, authenticated connection. These cues are currently the most widely deployed and most user-accessible defenses against phishing, and security advisories about phishing warn users to pay close attention to them at all times.
A general problem with the presentation of security cues is that users may disregard them, or attribute their presence to causes other than malicious attack. We observed this effect recently while developing a new authentication mechanism for logging in to web sites through an untrusted, public Internet terminal. Instead of requesting a secret password through the untrusted terminal (where it may be recorded by a key logger), authentication is performed on the user's cell phone using SMS messages and WAP browsing. To defend this approach against spoofing, however, it was necessary to associate a unique session name with the login attempt.
The user's only task was to confirm that the session name displayed in the untrusted web browser was the same as the session name displayed on the cell phone. In a user study of 20 users, however, the error rate for this confirmation was 30%. In other words, out of 20 times that we simulated an attack in which the session name on the phone differed from the session name on the terminal, users erroneously confirmed the session 6 times—giving the attacker access to their account.
Some users erred simply because they had stopped paying attention to the session names. Others made telling comments:
"There must be a bug because the session name displayed in the computer does not match the one in the mobile phone."
"The network connection must be really slow because the session name has not been displayed yet."
We subsequently changed the user interface design so that instead of simply approving the session name (Yes or No), the user is obliged to choose the session name from a short list of choices. Not surprisingly, the error rate dropped to zero, because the new design forces users to attend to the security cue and prevents them from rationalizing away discrepancies.
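The forced-choice design can be sketched as follows. The session-name wordlist and decoy count here are our illustrative assumptions, not the study's actual interface.

```python
import secrets

# Illustrative wordlist; a real deployment would use a larger dictionary.
WORDLIST = ["maple", "comet", "harbor", "violet", "ember", "quartz"]

def session_choices(real_name: str, n_decoys: int = 3) -> list:
    """Present the real session name among random decoys, shuffled, so the
    user must actively read it and match it against the phone's display."""
    decoys = set()
    while len(decoys) < n_decoys:
        word = secrets.choice(WORDLIST)
        if word != real_name:
            decoys.add(word)
    choices = list(decoys) + [real_name]
    secrets.SystemRandom().shuffle(choices)
    return choices
```

The design works because a user who merely clicks "Yes" can rationalize a mismatch, while a user forced to locate the matching name cannot help but compare the two displays.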
eBay's Account Guard (Figure 7) puts a site identity indicator into a dedicated toolbar.  Account Guard separates the Internet into three categories, described next.
Web sites truly belonging to eBay or PayPal, indicated by a green icon
Known spoofs of eBay or PayPal, indicated by a red icon
All other sites, indicated by a neutral gray icon
One problem with this approach is its lack of scalability. Phishing attacks are not limited to eBay and PayPal, of course. As of October 2004, the Anti-Phishing Working Group had collected attacks targeted at customers of 39 different organizations. It is impossible to cram all the possible toolbars, each representing a single organization, into a single browser. A better approach would be a single toolbar, created and managed by a single authority such as VeriSign or TRUSTe, to which organizations could subscribe if they have been, or fear becoming, victims of phishing attacks. VeriSign could do this right away by rolling out a toolbar that automatically certifies all members of its VeriSign Secured Seal program.
Figure 7. eBay Account Guard toolbar
SpoofStick (Figure 8) is a browser extension that helps users parse the URL and detect URL spoofing by displaying only the most relevant domain information on a dedicated toolbar. For example, when the current URL uses the username trick to disguise a server at 10.19.32.4, SpoofStick displays "You're on 10.19.32.4". When the current URL is http://www.citibank.com.intl-en.us, SpoofStick displays "You're on intl-en.us". Because it uses a large, colorful font, this toolbar is presumably easier for users to notice. But SpoofStick cannot solve the similar-domain-name problem: is ebay-members-security.com a domain owned by eBay, or is mypaypal.com a legitimate domain for PayPal? If the user's answer to either of these questions is yes, then the user will be tricked even with SpoofStick installed. Moreover, it is unclear whether seeing an IP address instead of a domain name raises sufficient suspicion in users' minds, because some legitimate sites also use bare IP addresses (e.g., Google caches).
Figure 8. SpoofStick toolbar
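SpoofStick's display can be approximated in a few lines. The last-two-labels heuristic below is a simplification of our own: correctly finding the registered domain requires a public suffix list (consider .co.uk), which real tools consult.

```python
import ipaddress
from urllib.parse import urlsplit

def relevant_domain(url: str) -> str:
    """Approximate SpoofStick's display: the last two labels of the host,
    or the bare IP when the host is a numeric address."""
    host = urlsplit(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return host                          # show the raw IP as-is
    except ValueError:
        return ".".join(host.split(".")[-2:])

relevant_domain("http://www.citibank.com.intl-en.us/")  # "intl-en.us"
```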
In order to address the problem of faked cues, Ye and Smith have proposed synchronized random dynamic boundaries.  With this approach, all legitimate browser windows change their border colors together at random intervals. Because a spoofed window generated by a remote site has no access to the random value generated on the local machine, its border does not change synchronously with the legitimate window borders. This approach was considered for inclusion in the Mozilla web browser, but was dropped out of concern that users wouldn't understand it (see Chapter 28).
A related approach, proposed by Tygar and Whitten,  is personalized display, in which legitimate browser windows are stamped with a personal logo, such as a picture of the user's face. The same principle can be used to distinguish legitimate web pages from phishing attacks. For example, Amazon and Yahoo! greet registered users by name. Anti-phishing advisories suggest that an impersonal email greeting should be treated as a warning sign for a potential spoofed email.  PassMark goes even further, by displaying a user-configured image as part of the web site's login page, so that the user can authenticate the web site at the same time that the web site authenticates the user. 
Personalization is much harder to spoof, but requires more configuration by the user. Configuration could be avoided if the web site automatically chose a random image for the user, but a system-chosen image may not be memorable. Another question about personalization is whether the lack of personalization in a phishing attack would raise sufficient warning flags in a user's mind. The absence of a positive cue like personalization may not trigger caution in the same way that the presence of a negative cue, like a red light in a toolbar, does.
Phishing depends on a user not only being deceived but also acting in response to persuasion. As a result, security advisories try to discourage users from performing potentially dangerous actions. For example, most current phishing attacks use email messages as the initial bait, in order to trick the recipient into clicking through a link provided in the email, which points to a phishing server. Security tips suggest that the user should ignore links provided by email, and instead open a new browser and manually type the URL of the legitimate site.
This advice is unlikely to be followed. Given how infrequent phishing attacks are relative to legitimate messages, the suggestion sacrifices the efficiency of hyperlinks in the many legitimate emails in order to prevent clicks on misleading links in the few phishing emails.
In the final step of a successful phishing attack, the user's action is translated into a system operation. This is the last chance to prevent the attack. Unfortunately, because phishing does not exploit system bugs, the system operations involved in a phishing attack are perfectly valid; posting information to a remote server, for example, is entirely ordinary. Warnings based solely on system operations will inevitably generate a high rate of false positives—that is, warnings about innocent actions (Figure 9). These false positives eventually cause users to disable the warnings or simply to become habituated to "swatting" them away.
Figure 9. Warning based on system operations
A more interesting approach involves modifying the system operation according to its destination. Web password hashing applies this idea to defend against phishing attacks that steal web site passwords. The browser automatically hashes the password typed by the user with the domain name to which it is being sent, generating a unique password for each site—so a phishing site receives only useless garbage. Web password hashing assumes that users will type their passwords only into an HTML password field. But this field can be spoofed, and a sophisticated attack may be able to trick users into disclosing their passwords through other channels.
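The idea can be sketched with a keyed hash salted by the destination domain. This HMAC construction illustrates the principle only; it is not the exact algorithm of the PwdHash project cited in the references.

```python
import base64
import hashlib
import hmac

def site_password(master_password: str, domain: str) -> str:
    """Derive a per-site password by keying a hash with the typed password
    and the domain the form is being submitted to (illustrative scheme)."""
    digest = hmac.new(master_password.encode(), domain.encode(),
                      hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest)[:12].decode()

# The same typed password yields a different string per destination, so a
# lookalike phishing domain receives a token that is useless at the real site.
site_password("hunter2", "ebay.com")
site_password("hunter2", "ebay-members-security.com")
```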
The most comprehensive solution thus far for stopping phishing at the user interface is SpoofGuard, a browser plug-in for Internet Explorer.  SpoofGuard addresses three of the four steps where phishing might be prevented.
At message retrieval time, SpoofGuard calculates a total spoof score for an incoming web page. The calculation is based on common characteristics of known phishing attacks, including:
Potentially misleading patterns in URLs, such as use of @
Similarity of the domain name to popular or previously visited domains, as measured by edit distance
Embedded images that are similar to images from frequently spoofed domains, as measured by image hashing
Whether links in the page contain misleading URLs
Whether the page includes password fields but fails to use SSL, as most phishing sites eschew SSL
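A few of the checks above can be sketched directly. The weights and domain list below are illustrative assumptions, not SpoofGuard's actual parameters, and the edit-distance and image-hashing checks are omitted for brevity.

```python
from urllib.parse import urlsplit

# Illustrative scoring in the spirit of the checks listed above.
POPULAR_DOMAINS = {"ebay.com", "paypal.com", "citibank.com"}

def spoof_score(url: str, has_password_field: bool) -> int:
    parts = urlsplit(url)
    score = 0
    if "@" in parts.netloc:                       # misleading username trick
        score += 2
    host = parts.hostname or ""
    base = ".".join(host.split(".")[-2:])
    if base not in POPULAR_DOMAINS and any(d.split(".")[0] in host
                                           for d in POPULAR_DOMAINS):
        score += 2                                # embeds a well-known brand
    if has_password_field and parts.scheme != "https":
        score += 2                                # asks for a password off SSL
    return score
```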
At presentation time, SpoofGuard translates this spoof score into a traffic light (red, yellow, or green) displayed in a dedicated toolbar. Further, when the score is above a threshold, SpoofGuard pops up a modal warning box that demands the user's consent before it proceeds with displaying the page.
For the action step, SpoofGuard does nothing to modify the user's online behavior. The user is free to click on any links or buttons in the page, regardless of their spoof score.
SpoofGuard becomes involved again in the system operation step, however, by evaluating posted data before it is submitted to a remote server. The evaluation tries to detect whether sensitive data is being sent, by maintaining a database of passwords (stored as hashes) and comparing each element sent against the database. If a user's eBay password is sent to a site outside ebay.com, the spoof score for the interaction is increased. This evaluation is also linked with the detection of embedded images, so that if the page also contains an eBay logo, the spoof score is increased still more. If the evaluation of the system operation pushes the spoof score over a certain threshold, the post is blocked and the user is warned.
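The posted-data evaluation can be sketched as follows; the function names and weight are our assumptions. Note that a real implementation needs label-aware domain matching, since the naive suffix test below would also accept a host like "notebay.com".

```python
import hashlib

# Passwords are remembered only as hashes, keyed to the domain at which
# they were first legitimately used (illustrative sketch).
password_db = {}   # sha256 hex of password -> domain it belongs to

def remember(password: str, domain: str) -> None:
    password_db[hashlib.sha256(password.encode()).hexdigest()] = domain

def outgoing_risk(form_values: list, destination: str) -> int:
    """Return extra spoof score if a known password is posted elsewhere."""
    risk = 0
    for value in form_values:
        digest = hashlib.sha256(value.encode()).hexdigest()
        origin = password_db.get(digest)
        if origin is not None and not destination.endswith(origin):
            risk += 3   # a saved password is leaving for a foreign domain
    return risk
```

Storing only hashes means the tool never holds the cleartext password, yet can still recognize it on the way out.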
Despite this limitation, SpoofGuard is an impressive step toward fighting phishing attacks on the client side.
Phishing attacks are likely to grow more sophisticated in the days ahead, and our defenses against them must continue to improve. Phishing succeeds because of a gap between the user's mental model and the true implementation, so promising technical solutions should try to bridge this gap, either by finding ways to visualize for the user details of implementation that would otherwise be invisible, or by finding ways to see the message from the user's point of view.
If technical solutions fail, we might ask whether there are legal or policy solutions. As a species of wire fraud, phishing is, of course, already illegal; no new legislation is required to prosecute an attacker. So, legal and policy solutions may have to restrict legitimate access instead, in order to make phishing attacks easier to detect or attackers easier to track down. One policy measure, already being undertaken by some companies, is to stop using email for critical communications with customers. AOL, one of the earliest targets of phishing attacks in the Internet era, has a unique message system for "Official AOL Mail" that cannot be spoofed by outsiders or other AOL members. More recently, eBay has responded to the spate of phishing attacks against it by setting up a private webmail system, "My Messages," for sending unspoofable messages to its users.
The success of phishing suggests that users authenticate web sites mainly by visual inspection: looking at logos, page layout, and domain names. The web browser can improve this situation by digging up additional information about a site and making it available for direct visual inspection. How many times have I been to this site? How many other people have been to this site? How long has this site existed on the Web? How many other sites link to it, according to a search engine like Google? Reputation is much harder to spoof than mere visual appearance. Authentication by visual inspection would be easier and more dependable if these additional visual cues were not all buried in the periphery of the web browser, but were integrated into the content of the page, in the user's locus of attention.
Another potential opportunity arises in the action step of an online interaction. A phishing attack is harmless unless the user actually acts on it. If earlier analysis suggests that the risk of phishing is high, then the system can suggest alternative safe paths ("Use this bookmark to go to the real eBay.com"), or ask the user to choose which site should really receive the information ("eBay.com in California, or 126.96.36.199 in South Korea?").
The ideal defense against phishing might be an intelligent security assistant that can perceive and understand a message in the same way the user does, so that it can directly compare the user's probable mental model against the real implementation and detect discrepancies. This ideal is likely to be a long way off. In the meantime, phishing will remain a problem that must be tackled by both a user and a computer, with an effective user interface in between.
Editor's note: Robert Miller and Min Wu contributed the material contained in this excerpt from Security & Usability. This excerpt is one of thirty-four essays in the book that cover authentication, privacy and anonymity, secure systems, commercialization, and more.
Robert Miller is an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a member of the MIT Computer Science and Artificial Intelligence Laboratory. He received his Ph.D. in computer science from Carnegie Mellon University in 2002. His research concerns intelligent user interfaces, end-user programming, and applications of usability to security, including authentication, secure email, and network visualization.
Min Wu is a Ph.D. candidate in electrical engineering and computer science at MIT. He received his M.S. in electrical engineering and computer science at MIT in 2001. He is interested in different techniques to deal with Internet fraud.
 Anti-Phishing Working Group, "Phishing Attack Trends Report, April 2004"; http://antiphishing.org/APWG_Phishing_Attack_Report-Apr2004.pdf.
 Bob Sullivan, "Consumers Still Falling for Phish," MSNBC (July 28, 2004); http://www.msnbc.msn.com/id/5519990/.
 Neil Chou, Robert Ledesma, Yuka Teraguchi, and John C. Mitchell, "Client-Side Defense Against Web-Based Identity Theft," 11th Annual Network and Distributed System Security Symposium (2004); http://theory.stanford.edu/people/jcm/papers/spoofguard-ndss.pdf.
 Anti-Phishing Working Group, "eBay—NOTICE eBay Obligatory Verifying—Invalid User Information" (March 9, 2004); http://www.antiphishing.org/phishing_archive/eBay_03-09-04.htm.
 Bruce Schneier, "Semantic Attacks: The Third Wave of Network Attacks," Crypto-Gram Newsletter (Oct. 15, 2000); http://www.schneier.com/crypto-gram-0010.html#1.
 Anti-Phishing Working Group, "US Bank—Maintenance Upgrade" (July 6, 2004); http://www.antiphishing.org/
 Anti-Phishing Working Group, "Citibank—Your Citibank Account!" (July 13, 2004); http://www.antiphishing.org/phishing_archive/07-13-04_Citibank_(your_Citibank_account!).html.
 EarthLink Toolbar: Featuring ScamBlocker; http://www.earthlink.net/earthlinktoolbar/download/.
 Tropical Software Secure Browser; http://www.tropsoft.com/secbrowser/.
 F-SECURE, "F-Secure Virus Descriptions: Bagle.N"; http://www.f-secure.com/v-descs/bagle_n.shtml.
 Anti-Phishing Working Group, "MBNA—MBNA Informs You!" (Feb. 24, 2004); http://www.antiphishing.org/phishing_archive/MBNA_2-24-04.htm.
 eBay Inc., "Email and Websites Impersonating eBay"; http://pages.ebay.com/help/confidence/isgw-account-theft-spoof.html.
 Federal Bureau of Investigation, Department of Justice, "FBI Says Web 'Spoofing' Scams Are a Growing Problem" (2003); http://www.fbi.gov/pressrel/pressrel03/spoofing072103.htm.
 PayPal Inc., "Security Tips"; http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/fraud-prevention-outside.
 J. D. Tygar and Alma Whitten, "WWW Electronic Commerce and Java Trojan Horses," Proceedings of the Second USENIX Workshop on Electronic Commerce (1996).
 Edward W. Felten, Dirk Balfanz, Drew Dean, and Dan S. Wallach, "Web Spoofing: An Internet Con Game," 20th National Information Systems Security Conference (1996).
 Zishuang Ye, Yougu Yuan, and Sean Smith, Web Spoofing Revisited: SSL and Beyond, Technical Report TR2002-417, Dartmouth College (2002).
 eBay toolbar; http://pages.ebay.com/ebay_toolbar/.
 VeriSign Secured Seal Program; http://www.verisign.com/products-services/security-services/secured-seal/index.html.
 Zishuang Ye and Sean Smith, "Trusted Paths for Browsers," ACM Transactions on Information and System Security 8:2 (May 2005), 153–186.
 Tygar and Whitten.
 eBay, Inc., "Tutorial: Spoof (fake) Emails"; http://pages.ebay.com/education/spooftutorial/.
 PassMark Security; http://www.passmarksecurity.com/twoWay.jsp.
 Dan Boneh, John Mitchell, and Blake Ross, "Web Password Hashing," Stanford University; http://crypto.stanford.edu/PwdHash/.
 Chou et al.
Copyright © 2009 O'Reilly Media, Inc.