Do the following twice, using a separate piece of paper for each "information request".

PART I

State a reasonably specific piece of information that you would like to learn by searching the web using a search engine. I haven't defined "reasonably specific", but I am ruling out questions like "does God exist". An example of a reasonably specific and not very difficult request (albeit probably not very interesting) would be something like "what is the name of the city/town in which George Bush was born". A harder example might be "who was the first Canadian ambassador to Egypt". Note that these examples have unique answers, and one suspects (but I didn't try it) that the information is somewhere on the web.

Choose a small set (at most 5 or 6) of keywords from which a user can start to search the web.

Estimate the anticipated difficulty of your request. For example:

simple: expect the user to obtain the desired answer by looking at the first few results of the first search
moderate: will take 2-3 searches and/or some browsing of a number of sites to obtain the answer
difficult: expect many searches (using new keywords) and/or a significant amount of browsing

In the case of difficult requests, it shouldn't be the case that you have deliberately chosen poor keywords, but rather that it seems hard to find the right keywords in advance of actually starting the search.

PART II

You will randomly choose two requests (from the set of requests generated in PART I) and attempt to answer each one, keeping a detailed log of your actions. For example, for each search engine request you issue, record the keywords used and the sites (returned by the search engine or reached by further browsing) that were accessed. A sketch of one possible log format appears after the due dates below.

Due:
Part I: Monday, October 20, at the start of seminar.
Part II: Thursday, October 29, at the start of tutorial.
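For the log in PART II, the following is a minimal sketch of one possible record format, written in Python. It is not required by the assignment; the class and field names are my own invention, and the keywords shown are purely illustrative. Plain paper notes in the same spirit would serve equally well.

    # One possible structure for a PART II search log (hypothetical names).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SearchStep:
        keywords: List[str]        # keywords given to the search engine
        sites_visited: List[str]   # result or browsed URLs actually opened
        notes: str = ""            # why this lead was followed or abandoned

    @dataclass
    class RequestLog:
        question: str              # the information request being answered
        difficulty_estimate: str   # "simple", "moderate", or "difficult"
        steps: List[SearchStep] = field(default_factory=list)
        answer: str = ""           # the final answer, once found

    # Example usage with the handout's George Bush request:
    log = RequestLog(
        question="What is the name of the city/town in which George Bush was born?",
        difficulty_estimate="simple",
    )
    log.steps.append(SearchStep(
        keywords=["George", "Bush", "born", "city"],
        sites_visited=["(URLs of the pages actually opened would go here)"],
        notes="Scanned the first few results for a birthplace.",
    ))

The point of a structured record like this is simply that every search engine request and every site visited ends up in the log, as PART II asks.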