===========================================================================
          CSC 363H       Lecture Summary for Week 3          Fall 2005
===========================================================================

(Left from last time under the subject of "Variants")

Non-deterministic Turing machines:
  - Allow the transition function to specify more than one possible
    outcome for any state and symbol. Formally,
        d : Q x T -> P(Q x T x {L,R}),
    where P(A) is the "power set" of A, the set of all subsets of A, i.e.,
    given state q and tape symbol a, d(q,a) gives a set of next states,
    symbols, and head movements (possibly empty).
  - For deterministic TMs, computation is "straight-line":
        initial config -> next config -> next config -> ...
    For non-deterministic TMs, think of the initial config as a root,
    where each config has a finite number of next configs (possibly none),
    which yields a computation "tree".
    Convention: a non-deterministic TM "accepts" if there is at least one
    path in the computation tree that leads to q_accept (irrespective of
    what other paths do) -- intuitively, non-deterministic TMs carry out
    all computation paths "in parallel" and stop as soon as one path
    accepts. A non-deterministic TM (NTM) "rejects" if every path leads to
    q_reject. An NTM "loops" if no path accepts and at least one path
    never halts.
  - Note: NTMs cannot be implemented directly, unlike regular TMs.
  - Equivalent to TMs! Idea: go through all paths in the computation tree
    in breadth-first fashion (depth-first doesn't work because we could
    get stuck in an infinite loop even though another path accepts).
  - READ: Theorem 3.16 on pp.150-151 (1st ed: 3.10 on pp.138-139).
    Note: You will be responsible for this material, in the sense that we
    may ask questions about it in any future tutorial exercise,
    assignment, term test, or the final exam. If there is anything in the
    book that you are unclear about, please ask questions during office
    hours or at the start of next lecture!

Enumerators:
  - No input, but two tapes and a special state q_print.
    The machine computes, and whenever it enters state q_print, the string
    written on the second tape is "printed". The language is the set of
    strings printed during the computation (could be infinite if the
    computation never halts).
  - Equivalent to TMs: (Theorem 3.21 on p.135 -- 1st ed: 3.13 on p.141)
    . Given an enumerator E for language A, construct a TM M that
      recognizes A as follows:
        M = "On input w:
             1. Run E; every time a string is "printed", compare it
                with w.
             2. Accept if w is ever printed. Reject if E stops without
                printing w."
      M accepts input w iff w in A.
    . Given a TM M that recognizes A, construct an enumerator for A as
      follows:
        E = "Repeat for i = 1, 2, 3, ...
             1. Run M for i steps on each input s_1, s_2, ..., s_i.
             2. Print every string that is accepted."
      where s_1, s_2, s_3, ... is a systematic list of all strings in S*
      (e.g., in lexicographic order). E will eventually print every string
      accepted by M.

Computing functions:
  - A more "natural" notion of computing: a TM works on input w and
    "produces" output f(w) (the string left on the tape when the machine
    halts). Many possible formal definitions, depending on specific
    details.
  - Any function computable this way can also be "computed" by recognizing
    the language { w#f(w) : w in S* } -- and any such recognizable
    language gives rise to a computable function.

Other models:
  - Register machines, Post correspondence systems, recursive functions,
    etc.: all have unrestricted access to unlimited memory, and each step
    carries out only a finite amount of work.
  - Given formal definitions of the different models, all have been shown
    equivalent to each other!

------------------------
The Church-Turing thesis
------------------------

"All reasonable models of computation are equivalent (where reasonable
means: has access to unlimited resources, and can only carry out a finite
amount of work in one step)."

(A "thesis" rather than a "theorem" because it states something about the
informal notion of "reasonable model of computation". Any formal model
chosen can be proven equivalent to the others.)
In other words, any reasonable model of computation captures the informal
notion of "computation" precisely, i.e., there is a single, well-defined
notion of "algorithm" independent of the model used to define it. In
particular, Turing machines are as powerful as any other model.

Question: Is there some language that cannot be recognized by a TM?
(We'll get back to this question a little later.)

--------------------------
Turing machine conventions
--------------------------

Turing machine algorithms are described in stages, with indentation for
blocks that represent loops. Each stage is simple (and clear) enough that
it is obvious it can be implemented on a Turing machine.

The description always starts with the input, always a string. Use the
notation <O> to represent the string encoding of object O (e.g., <G>
represents the string encoding of graph G). A TM can easily decode such
strings and reject automatically if the input does not follow the proper
encoding.

The description must always include clear, explicit conditions for
accepting and rejecting.

READ: Example 3.23 on pp.157-158 (1st ed: 3.14 on pp.145-147).

-------------------
Decidable languages
-------------------

A_DFA = { <B,w> | B is a DFA that accepts input w } is decidable.
  - A TM can check that <B,w> is a valid encoding of a DFA and a string
    (using any reasonable convention), then use its tape to keep track of
    the current state of B and the position in string w while simulating
    B's transitions one by one (going back and forth to check the
    description of B). At the end, it is easy to check whether the last
    state is accepting or rejecting.
    Using the conventions above:
        On input <B,w>:
        1. Simulate B on w (use appropriate tape alphabet symbols to keep
           track of B's position on input w; use a separate portion of
           the tape to keep track of B's current state).
        2. Accept if B accepts; reject if B rejects.
    Since a DFA always halts, the TM just described is a decider for
    A_DFA.

Recall the constructions for transforming an NFA into a corresponding
DFA, and an RE into a corresponding DFA.
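To see concretely why such a construction is algorithmic, here is a
minimal Python sketch of the subset construction (NFA to DFA). The
representation of an NFA as a dict mapping (state, symbol) to a set of
states, with no epsilon-moves, is an assumption made for illustration,
not a convention from the course.

```python
# Minimal sketch of the subset construction (NFA -> DFA).
# Assumption for illustration: an NFA is a dict mapping
# (state, symbol) -> set of next states, with no epsilon-moves.

def nfa_to_dfa(nfa_delta, start, accepting, alphabet):
    """Return (dfa_delta, dfa_start, dfa_accepting); DFA states are frozensets."""
    dfa_start = frozenset([start])
    dfa_delta = {}
    worklist = [dfa_start]
    seen = {dfa_start}
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            # Next DFA state: all NFA states reachable from some
            # state in S on symbol a.
            T = frozenset(q2 for q in S for q2 in nfa_delta.get((q, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    # A subset is accepting iff it contains an accepting NFA state.
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, dfa_start, dfa_accepting

# Example: NFA over {0,1} accepting strings that end in "01".
delta = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q0"}, ("q1", "1"): {"q2"}}
dfa_delta, dfa_start, dfa_accepting = nfa_to_dfa(delta, "q0", {"q2"}, ["0", "1"])

def dfa_accepts(w):
    S = dfa_start
    for c in w:
        S = dfa_delta[(S, c)]
    return S in dfa_accepting
```

Each step of this loop is simple bookkeeping over finite sets, which is
exactly the kind of work a TM can carry out stage by stage.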
These constructions are algorithmic, so they can be carried out by TMs
(by the Church-Turing thesis). This immediately gives that
    A_NFA = { <B,w> | B is an NFA that accepts w }  and
    A_RE  = { <R,w> | R is an RE that can generate w }
are decidable.

READ: Theorem 4.4 on p.168 (1st ed: 4.4 on p.154) for a proof that
E_DFA = { <B> | B is a DFA that accepts no string } is decidable.

A_TM = { <M,w> | M is a TM that accepts input w } is recognizable.
  - There is a TM U (the "universal TM") that takes a reasonable encoding
    of M and its input w, and that carries out M's computation on w. U
    accepts if M accepts w, U rejects if M rejects w, and U goes into an
    infinite loop if M goes into an infinite loop on w. More formally:
        On input <M,w>:
        1. Simulate M on input w (use a portion of the tape to represent
           M's configuration -- state and content of M's tape, including
           head position -- and move back and forth between M's
           description and M's configuration for each simulation step).
        2. Accept if M accepts; reject if M rejects.
    Difference from A_DFA: U could get stuck in an infinite loop in stage
    1 (if M never halts on w), so U recognizes A_TM but does not decide
    A_TM -- this does not show that A_TM is undecidable, just that U is
    not a decider for A_TM.
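The behaviour of U can be sketched in Python: a simulator that takes a
description of M (here, simply a transition table -- an encoding chosen
for illustration, not the course's formal one) and an input w, and
mirrors M exactly. Like U, it halts with the right answer iff M halts,
and loops forever iff M loops, so it recognizes but does not decide
acceptance.

```python
# Minimal sketch of a "universal" simulator for deterministic TMs.
# Assumption for illustration: M is encoded as a dict mapping
# (state, symbol) -> (new_state, written_symbol, "L" or "R").

def simulate(delta, start, accept, reject, w, blank="_"):
    tape = dict(enumerate(w))        # sparse tape: position -> symbol
    state, head = start, 0
    # This loop runs forever if M never reaches q_accept or q_reject --
    # exactly the reason U is a recognizer, not a decider, for A_TM.
    while state not in (accept, reject):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:
            return False             # no move available: treat as reject
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == accept

# Example machine over {0,1}: scan right, accept on seeing a 1,
# reject at the first blank (i.e., accept strings containing a 1).
delta = {
    ("scan", "0"): ("scan", "0", "R"),
    ("scan", "1"): ("qa",   "1", "R"),
    ("scan", "_"): ("qr",   "_", "R"),
}
```

For example, simulate(delta, "scan", "qa", "qr", "0001") accepts, while
the same call on "000" rejects; feeding the simulator a machine that
loops would make the simulator loop as well.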