===========================================================================
        CSC 363H       Lecture Summary for Week 1        Spring 2007
===========================================================================

--------------------------
Administrative information
--------------------------

Course information sheet.

Lectures:
  - each week, readings from textbook
  - read and understand basic material and bring questions
  - lectures will go over basic material more quickly and spend more time
    on "intermediate"-level material

Tutorials:
  - each week, exercises to work on
  - prepare solutions for tutorial
  - TA will discuss and work on solutions together with you
  - tutorials intended more like problem sessions than lectures
  - tutorials are an integral part of the course and not optional (material
    needs to be thought about and worked on to be learned)

--------------------
Computability Theory
--------------------

Outline (topics and textbook sections):
 1. Turing machines: definitions, examples (3.1)
 2. Variants, the Church-Turing thesis (3.2, 3.3)
 3. Diagonalization, the Halting problem (4.1, 4.2)
 4. Decidability and recognizability, examples (4.2, 5.1)
 5. Reducibility, examples (5.1, 5.2)
 6. Mapping reducibility, examples (5.3)

---------------
Turing machines
---------------

Motivation:
  - To answer the question "what can we compute?", we first have to pin
    down what it means "to compute".
  - We want a model that defines what can be computed -- in a sense, the
    "ultimate computer".
  - A home computer is powerful enough, but far too complicated to study
    abstractly.
  - Intuition about what is "powerful enough" comes from a handful of
    tasks that we ``know'' should be computable; any model we define had
    better be able to compute them.  This immediately rules out Finite
    State Automata, which cannot even decide whether a string is of the
    form w#w for w in {0,1}*.
  - Goal: define "computation" as abstractly and generally as possible.
    The model needs to be both simple and expressive.
  - Many possible formalizations: start with one, study it, then compare
    it with others.

Informal idea: similar to a Finite State Automaton, but with no limitation
on access to the input.
  - one-way infinite "tape" divided into cells, or "squares" (each square
    holds one symbol)
  - read-write "head" positioned on one square at a time
  - "control" can be in one of a fixed number of states
  - initially, the tape contains the input (one symbol per square)
    followed by blanks, and the head is on the leftmost input symbol
  - current state and symbol read determine next state, symbol written,
    and movement of head (one square left or right)

Differences between FSAs and Turing machines:
  - TM can both read and write symbols.
  - Infinite tape.
  - Head can move left or right (convention: moving left on the leftmost
    square leaves the head where it is).
  - Special "accept" and "reject" states that stop computation
    immediately.

Example: M_1 that accepts exactly the strings of the form w#w for w in
{0,1}*.  Read the first symbol and cross it off (replace it with a new
symbol 'x'), move right until #, keep moving right until the first non-x
symbol, and verify it is the same as the first symbol seen earlier
(remembered through the states); cross it off and go back to the leftmost
non-x symbol to repeat.  If there is more than one #, or the symbols
differ, or one side has more symbols than the other, reject; otherwise,
accept.

Formal definition:
  - A Turing machine is a 7-tuple (Q,S,T,d,q_0,q_accept,q_reject), where
    . Q is a finite set of "states"
    . S is the "input alphabet" ("blank" symbol _ not in S)
    . T is the "tape alphabet" (S subset of T, _ in T)
    . d : Q x T -> Q x T x {L,R} is the "transition function"
    . q_0 in Q is the "start state" (or "initial state")
    . q_accept in Q is the "accepting state"
    . q_reject in Q is the "rejecting state" (q_reject =/= q_accept)

Reading for next class: sections 3.1, 3.2, 3.3
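As a concrete illustration (not part of the lecture itself), the 7-tuple
definition can be sketched in Python: a small simulator for the transition
function d, plus one example machine deciding {0^n 1^n : n >= 0}, which,
like w#w, is beyond the power of FSAs.  The state names q0..q3 and the
crossing-off symbols 'X' and 'Y' are illustrative choices, not from the
textbook.

```python
# Sketch of a deterministic TM simulator following the 7-tuple definition.
# delta maps (state, symbol) -> (new_state, symbol_to_write, move),
# move in {'L', 'R'}.  A missing transition is treated as entering q_reject.

BLANK = '_'

def run_tm(delta, q0, q_accept, q_reject, input_string, max_steps=10_000):
    tape = list(input_string) or [BLANK]   # tape initially holds the input
    head, state = 0, q0                    # head on leftmost input symbol
    for _ in range(max_steps):
        if state == q_accept:
            return True                    # accept state halts immediately
        if state == q_reject:
            return False                   # reject state halts immediately
        symbol = tape[head]
        state, write, move = delta.get((state, symbol),
                                       (q_reject, symbol, 'R'))
        tape[head] = write
        if move == 'R':
            head += 1
            if head == len(tape):
                tape.append(BLANK)         # one-way infinite: grow on demand
        else:
            head = max(0, head - 1)        # moving left at left end stays put
    raise RuntimeError("step limit exceeded (machine may be looping)")

# Example machine for {0^n 1^n}: repeatedly cross off the leftmost 0
# (write 'X'), scan right, cross off the first matching 1 (write 'Y').
delta = {
    ('q0', '0'):   ('q1', 'X', 'R'),   # cross off leftmost 0
    ('q0', 'Y'):   ('q3', 'Y', 'R'),   # no 0s left: verify only Ys remain
    ('q0', BLANK): ('qa', BLANK, 'R'), # empty input: accept
    ('q1', '0'):   ('q1', '0', 'R'),   # scan right for a 1 ...
    ('q1', 'Y'):   ('q1', 'Y', 'R'),
    ('q1', '1'):   ('q2', 'Y', 'L'),   # ... cross off the matching 1
    ('q2', '0'):   ('q2', '0', 'L'),   # scan back left to the last X ...
    ('q2', 'Y'):   ('q2', 'Y', 'L'),
    ('q2', 'X'):   ('q0', 'X', 'R'),   # ... and repeat from the next 0
    ('q3', 'Y'):   ('q3', 'Y', 'R'),
    ('q3', BLANK): ('qa', BLANK, 'R'), # every 0 matched a 1: accept
}

print(run_tm(delta, 'q0', 'qa', 'qr', '0011'))   # True
print(run_tm(delta, 'q0', 'qa', 'qr', '0010'))   # False
```

Note how the simulator encodes the conventions from above: the tape is
one-way infinite (grown with blanks on demand), moving left on the
leftmost square leaves the head where it is, and q_accept / q_reject stop
the computation immediately.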