Trends in Functional Programming Volume 7
Preface
1 Proving Termination Using Dependent Types: the Case of Xor-Terms (Jean-François Monin, Judicaël Courant)
2 Proving the Correctness of Algorithmic Debugging for Functional Programs (Yong Luo, Olaf Chitil)
3 Systematic Synthesis of Functions (Pieter Koopman, Rinus Plasmeijer)
4 A Purely Functional Implementation of ROBDDs in Haskell (Jan Christiansen, Frank Huch)
5 Efficient Interpretation by Transforming Data Types and Patterns to Functions (Jan Martin Jansen, Pieter Koopman, Rinus Plasmeijer)
6 Object-Oriented Programming in Dependent Type Theory (Anton Setzer)
7 A Sharing Analysis for SAFE (Ricardo Peña, Clara Segura, Manuel Montenegro)
8 Memory Usage Improvement Using Runtime Alias Detection (Ryo Hanai, Tomoharu Ugawa, Masashi Yoneda, Masahiro Yasugi, Taiichi Yuasa)
9 A Model of Functional Programming with Dynamic Compilation and Optimization (Martin Grabmüller)
10 Functional Concepts in C++ (Rose H. Abdul Rauf, Ulrich Berger, Anton Setzer)
11 Resource-Based Web Applications (Sebastian Fischer)
12 Extensible and Modular Generics for the Masses (Bruno C. d. S. Oliveira, Ralf Hinze, Andres Löh)
13 When is an Abstract Data Type a Functor? (Pablo Nogueira)
Proving Termination Using Dependent Types: the Case of Xor-Terms
Jean-François Monin, Judicaël Courant
Abstract: We study a normalization function in an algebra of terms quotiented by an associative, commutative and involutive operator (logical xor). This study is motivated by the formal verification of cryptographic systems, which relies on a normalization function for xor-terms. Such a function is easy to define using general recursion. However, as it is to be used in a type-theoretic proof assistant, we also need a proof of its termination. Instead of using a mixture of various rewriting orderings, we follow an approach drawing on the power of Type Theory with dependent types. The results are to be applied in the proof of the security API described in earlier work.
1.1 INTRODUCTION
This work originates in the verification of a cryptographic system in the Coq proof assistant. In the course of this verification, we modelled plaintext and encrypted data as first-order sorted terms. For instance, a typical term could be {K1 ⊕ K2}_H(KM,H(EXP,KP)), where {x}_y denotes the encryption of x under a key y, H(x,y) denotes the fingerprint (cryptographic checksum) of the pair (x, y), and x ⊕ y denotes the bitwise exclusive or of x and y. The cryptographic primitives {_}_ and H are commonly assumed to be perfect, and can thus be treated as free constructors. By contrast, we need to deal with the equational theory of the exclusive or function, as many (potential or effective) attacks are based on its algebraic properties.
Deduction in the presence of a non-trivial equational theory is notoriously difficult in the general case: the matching problem in the presence of an associative-commutative (AC) function symbol is known to be NP-complete, even in very restricted cases, and AC matching is still an active research topic. To solve our particular problem, we only needed a way to define a canonical form for closed terms and a proof that each term has a canonical form. More precisely, we needed a canonicalization function putting any term into its canonical form. On paper, we would use results from term rewriting theory to prove the existence of a normalization function with respect to some rewriting system. Unfortunately, the formalization of these results in Coq is as yet only partial. We therefore tried to apply the methods of term rewriting theory to our problem, but stopped when we realized the amount of work needed to complete the definition of the normalization function (section 1.2).
Then we observed that the existence of such a normalization function could be reduced to the existence of a normalization function for terms built over ⊕ and 0 only. In other words, we can layer our terms into alternating levels of free and non-free constructors, and normalization of the overall term can be deduced from the normalization of all layers (section 1.3).
We formalized such a proof in Coq (section 1.4). Our results are to be applied in our proof of security of an API. However, the approach investigated here is not tied to the specific term algebra of this case study. We reflected this in several places in our formal development by stating key definitions and lemmas in an abstract setting, independently of any specific application.
1.2 BACKGROUND
1.2.1 The Need for a Canonicalization Function
In order to reason about our cryptographic system, we need to define some functions or properties over terms. To be consistent, we must ensure that these definitions are compatible with the equational theory E induced by the commutativity, associativity and cancellation laws of ⊕ and 0. We can define such a property P in one of the following ways:
• We define P on the whole set of terms (for instance by structural induction). We check afterwards that P is indeed compatible with equality, that is, we check that for all terms t and t′, t =E t′ implies (P(t) ⇔ P(t′)).
• We give a canonicalization function N for terms, i.e. a function N from terms to terms such that for all terms t and t′, t =E N(t), and t =E t′ implies N(t) = N(t′). Then we define P over canonical forms only and extend it in the only compatible way. More formally, we first define some auxiliary predicate P′ over terms (for instance by structural induction) and P(t) is defined as P′(N(t)). Thus P is compatible with E by construction.
In our case, the latter approach looks like the only reasonable one. For instance, we needed to check whether some secret constant appears in a given term. Let us consider an example: it is clear that KM does not appear in {K1}_H(EXP,KEK). But checking that it does not appear in KM ⊕ {K1}_H(EXP,KEK) ⊕ KM is a bit more difficult, as one has to notice that, although KM syntactically is a subterm of this latter term, the occurrences of KM have no semantic significance here because of the self-cancellation law. We therefore do not see how to give a definition for this notion by structural induction on the whole set of terms. On the other hand, once we have applied all possible simplification rules (self-cancellation and neutral element cancellation, modulo associativity and commutativity), we just need to check for a syntactic occurrence of KM.
Therefore, we need to define a canonicalization function N over terms which is also a simplification function, i.e. it ensures that for every term t, no simplification rule applies to N(t).
1.2.2 What Term Rewriting Theory Says
The equational theory of ⊕ and 0 can be described by the following equations:
Associativity (x ⊕ y) ⊕ z ≈ x ⊕ (y ⊕ z) (1.1)
Commutativity x ⊕ y ≈ y ⊕ x (1.2)
Neutral element x ⊕ 0 ≈ x (1.3)
Involutivity x ⊕ x ≈ 0 (1.4)
Term rewriting theory modulo associativity and commutativity (AC) says that this equational theory is generated by the following rewrite system, where ⊕ is an associative-commutative symbol:
0 ⊕ x → x (1.5)
x ⊕ x → 0 (1.6)
If this system is terminating and its critical pairs converge, the existence of a unique normal form is ensured up to associativity and commutativity. In order to get a truly unique normal form, one just has to sort repeated applications of ⊕ according to a given arbitrary total ordering.
Tools for automatically checking termination and critical pair convergence have been developed over the years by the term rewriting community. For instance, the tool CiME checks these properties instantaneously (figure 1.1).
1.2.3 Applicability to Type Theory
However, formally giving such a normalization function in Type Theory and formally proving its correctness is much more challenging.
The first show-stopper is termination. In Type Theory, all functions are total and terminating by construction. This means that general fixpoints are not allowed when defining functions. In order to achieve this fundamental property while keeping good expressive power, Type Theory limits recursion to higher-order primitive recursion and structural recursion over all inductively defined types. The theory and support tools automatically provide combinators for natural numbers, lists and user-defined inductive types, which are enough in most cases. Alternatively, in recent versions of Coq, the user can define a function recursively, provided she points out to the system a parameter of the function which structurally decreases at each recursive call. A special and very important case is well-founded recursion: the decreasing argument is a proof, formalized as an inductive object, witnessing that a given value is accessible for some binary relation. In complex cases, the latter approach is by far the more convenient. But there is no such thing as a free lunch: while type-checking ensures totality and strong normalization, the user has to design the right types and provide the right arguments. Of course, standard libraries about well-founded relations may help here. Besides, one can consider additional tools or methodologies such as those developed by Bertot and Balaa, or Bove and Capretta.
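Coq's notion of structural recursion can be illustrated with a toy example, transcribed here into Haskell (the type and function below are ours, purely illustrative, and not part of the paper's development):

```haskell
-- Illustrative only: a tiny inductive type of binary trees.
data Tree = Leaf | Node Tree Tree

-- Structurally recursive: every recursive call is applied to a direct
-- subterm of the argument, so the corresponding Coq Fixpoint is
-- accepted without any explicit termination or well-foundedness proof.
size :: Tree -> Int
size Leaf       = 1
size (Node l r) = 1 + size l + size r
```

A function like the canonicalization function N is not of this shape: its natural definition recurses on rewritten terms rather than on direct subterms, which is exactly why the well-founded-recursion machinery discussed above becomes necessary.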
In the case of our canonicalization function, using standard rewriting arguments is surprisingly difficult in a proof assistant such as Coq:
• Although some theoretical work addresses the addition of rewriting to the Calculus of Constructions, this is yet to be implemented.
• Some work provides ways to define tactics for reasoning over associative-commutative theories, but these only provide ways to normalize given terms, not to define a normalization function.
We therefore tried to define our own specific rewriting relation corresponding to the defining equations of ⊕, but found this approach really costly:
• We had to give a well-founded ordering. As neither a recursive path ordering nor a lexicographic path ordering library was available in Coq, we used the lexicographic combination of a partial ordering ≤1 with a total ordering ≤2, where ≤1 is a polynomial ordering and ≤2 is a lexicographic ordering. Although ≤2 is not well-founded, the set of terms having a given weight for the polynomial defining ≤1 is finite; therefore we could prove in Coq that the lexicographic combination of ≤1 and ≤2 is well-founded.
• Then we defined a rewriting relation. The difficult part here is taking commutativity and associativity into account. In order to avoid AC-matching issues, we decided to add a rewriting rule for associativity and a conditional rewriting rule for commutativity (x ⊕ y rewrites to y ⊕ x if and only if x is smaller than y). Moreover, we had to complete our rewriting system in order to close critical pairs such as x ⊕ x ⊕ y, which could be rewritten to y or to x ⊕ (x ⊕ y).
• We still had to define the normalization function. As mentioned above, the definition of such a function using well-founded induction in Coq is difficult. Therefore we stopped there and used another approach instead.
• Once this was done, we would still have to prove that the transitive closure of our rewriting relation is irreflexive, that our normalization function is sound with respect to it, and that it computes normal forms. Essentially, the main results to prove here would be that every term t rewrites in zero or more steps to N(t), and that no rewriting rule applies to N(t).
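The relation sketched in these items can be prototyped outside Coq. The following Haskell fragment is our own toy model (string atoms, with the derived structural ordering standing in for the combination of ≤1 and ≤2): it implements 0-cancellation, self-cancellation, the completion rule closing the critical pair x ⊕ (x ⊕ y), right-association, and the conditional commutativity rule, and it simply trusts termination, which is the very point the Coq development would have to prove.

```haskell
import Control.Applicative ((<|>))

-- Toy xor-terms: 0, named atoms, and binary ⊕ (representation is ours).
data X = Zero | A String | X :+: X
  deriving (Eq, Ord, Show)
infixr 5 :+:

-- One rewriting step, or Nothing when no rule applies.
step :: X -> Maybe X
step (Zero :+: x)               = Just x                  -- 0 ⊕ x → x
step (x :+: Zero)               = Just x                  -- x ⊕ 0 → x
step (x :+: y) | x == y         = Just Zero               -- x ⊕ x → 0
step (x :+: (y :+: z)) | x == y = Just z                  -- completion rule
step ((x :+: y) :+: z)          = Just (x :+: (y :+: z))  -- associativity
step (x :+: y)
  | y < x     = Just (y :+: x)                            -- conditional commutativity
  | otherwise = ((:+: y) <$> step x) <|> ((x :+:) <$> step y)
step _                          = Nothing

-- Exhaustive application; termination is assumed here, not proved.
nf :: X -> X
nf t = maybe t nf (step t)
```

For instance, nf (A "K1" :+: A "K2" :+: A "K1") reduces to A "K2".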
1.3 OUR APPROACH
1.3.1 Intuitive Presentation
Although implementing normalization in Coq is hard in general, it is much easier in some particular cases. For instance, consider the term t1 = K1 ⊕ K2 ⊕ K1. In the examples considered here, K1 and K2 are constants (they are taken from our cryptographic application). Normalizing t1 is easy as it contains only constants and the exclusive or function. Such terms can be normalized as follows: we first compute the list of the constants the term contains, here [K1; K2; K1]; then we sort this list with respect to some total ordering on these constants, here obtaining, say, [K2; K1; K1]. Then we easily detect repeated elements and apply self-cancellation: we therefore get the list [K2], hence the normal form of K1 ⊕ K2 ⊕ K1 is K2.
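The recipe just described (collect the constants, sort, cancel repeated pairs) is directly executable. Below is a minimal Haskell sketch of our own, over plain strings, with the standard string ordering standing in for the unspecified total ordering on constants:

```haskell
import Data.List (sort)

-- Remove adjacent equal pairs from a sorted list: self-cancellation
-- x ⊕ x ≈ 0, applied exhaustively.
cancelPairs :: Eq a => [a] -> [a]
cancelPairs (x : y : rest) | x == y = cancelPairs rest
cancelPairs (x : rest)              = x : cancelPairs rest
cancelPairs []                      = []

-- Normal form of a xor of constants, both represented as lists;
-- the empty list plays the role of the term 0.
normalizeXor :: Ord a => [a] -> [a]
normalizeXor = cancelPairs . sort
```

With the string ordering, normalizeXor ["K1", "K2", "K1"] sorts to ["K1", "K1", "K2"], the repeated pair cancels, and the result is ["K2"], matching the normal form K2 above.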
Consider now the following term t2, where EXP and KM are two other constants, while E is a free binary constructor – E(u, v) stands for the encryption of u by v, usually denoted by {u}_v:
t2 = {K1 ⊕ K2 ⊕ K1}_(KM ⊕ EXP) ⊕ KP ⊕ {K2 ⊕ 0}_(KM ⊕ EXP)
We first normalize the subterms K1 ⊕ K2 ⊕ K1 and K2 ⊕ 0. Replacing them in the initial term, we get:
{K2}_(KM ⊕ EXP) ⊕ KP ⊕ {K2}_(KM ⊕ EXP)
Now, both occurrences of {K2}_(KM ⊕ EXP) behave as constants with respect to normalization. Indeed, they are in normal form, and moreover the applications of the function symbol E at their heads act as a barrier preventing their subterms from any interaction with outside subterms with respect to rewriting. Therefore, we can normalize the whole term t2 as we previously did for t1: we sort the list [{K2}_(KM ⊕ EXP); KP; {K2}_(KM ⊕ EXP)] with respect to some total ordering, then detect repeated elements, apply self-cancellation and get KP as the normal form.
We generalize this approach to all terms as follows, using typical features of Type Theory. In a first stage, the term to be normalized is layered in such a way that each level is built up from terms belonging to the previous level. These levels alternate between layers built up using only ⊕ constructors and layers built up using only other constructors, much as lasagne alternates between pasta-only layers and sauce layers (mixed to your taste of tomato, meat and cheese – in fact anything but pasta). At the term level, this stage is nothing more than constructor renaming. In a second stage, the layers are normalized bottom-up. Normalizing a ⊕-layer only requires us to sort a list and remove pairs of identical items, while normalization of a non-⊕-layer is just the identity.
The second stage is easy, although we had to avoid some pitfalls (see section 1.4.3). Surprisingly, the first stage requires more work than expected: the new version of a term has a much more precise type whose computation turns out to be non-trivial (see section 1.4.2). In the whole development, we need the full power of programming with polymorphic dependent types: each layer has its own specific ordering relation, which depends on the ordered structure of the previous layer.
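The two stages can be modelled in miniature. The Haskell sketch below is our own simplification: a single recursive type with an n-ary xor node in place of the paper's layered dependent types, and the derived structural ordering in place of the per-layer orderings:

```haskell
import Data.List (sort)

-- Simplified terms: constants, a free binary constructor E, and an
-- n-ary xor node; Xor [] plays the role of 0.
data Term = C String | E Term Term | Xor [Term]
  deriving (Eq, Ord, Show)

-- Bottom-up normalization.  A non-xor layer only normalizes its
-- children; a xor layer flattens nested xors, sorts its arguments and
-- removes pairs of identical items (self-cancellation).
norm :: Term -> Term
norm (C s)    = C s
norm (E u v)  = E (norm u) (norm v)
norm (Xor ts) =
  case cancel (sort (concatMap flatten (map norm ts))) of
    [t] -> t
    us  -> Xor us
  where
    flatten (Xor us) = us
    flatten t        = [t]
    cancel (x : y : rest) | x == y = cancel rest
    cancel (x : rest)              = x : cancel rest
    cancel []                      = []
```

Replaying the example of section 1.3.1, the term t2 (encoded as a top-level Xor over the two E-terms and KP) normalizes to C "KP", as expected.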
1.3.2 Layering Types
Let us now outline our approach in more detail. We call T the first-order inductive type of terms to be normalized. Let {⊕, 0} ∪ N be the set of constructors of T. For instance, in our application, we have N = {PC, SC, E, Hash} with
PC : public_const → T
SC : secret_const → T
E : T → T → T
Hash : T → T → T
where public_const and secret_const are suitable enumerated types.
As explained in section 1.3.1, we want to split a T-term into layers. Moreover, we have to state and prove a number of functions and lemmas for each layer. For modularity reasons, it is better to handle each layer separately. Each layer provides a data type, a comparison function and a sorting function on this type, as well as correctness lemmas. Intuitively, it could be seen as a module in the sense of Harper, Lillibridge and Leroy, or better, a functor, because each layer relies on the interface of the previous layer. HLL modules have been adapted to the Calculus of Inductive Constructions and implemented in Coq. But our case is beyond their scope, because here the number of layers is a dynamic notion which depends on a piece of data, namely the term being normalized. Therefore we use only the features of basic CIC: dependent, polymorphic and inductive types.
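Ignoring the dependent layering for a moment, the term algebra T itself can be transcribed into Haskell; in the sketch below the inhabitants of the two enumerated types are placeholders of our own choosing, and the constructor arities are inferred from the example terms of this chapter:

```haskell
-- Placeholders standing in for the application's enumerated types
-- public_const and secret_const (the actual inhabitants are ours).
data PublicConst = EXP | KP     deriving (Eq, Ord, Show)
data SecretConst = KM | K1 | K2 deriving (Eq, Ord, Show)

-- The term type T: non-free constructors ⊕ (Xor) and 0 (Zero), plus
-- the free constructors N = {PC, SC, E, Hash}; E u v is {u}_v and
-- Hash u v is H(u, v).
data T = Zero
       | Xor T T
       | PC PublicConst
       | SC SecretConst
       | E T T
       | Hash T T
       deriving (Eq, Ord, Show)

-- The running example {K1 ⊕ K2}_H(KM, H(EXP, KP)):
example :: T
example = E (Xor (SC K1) (SC K2))
            (Hash (SC KM) (Hash (PC EXP) (PC KP)))
```

The layering of section 1.3.1 would then split a value of T into alternating Xor-only and free-constructor-only levels; that splitting, and the per-layer orderings it requires, is where the dependent types of CIC come in.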
Excerpted from Trends in Functional Programming Volume 7 by Henrik Nilsson. Copyright © 2007 Intellect. Excerpted by permission of Intellect Ltd.