Title: Progress in Cryptology – LATINCRYPT 2017
Subtitle: 5th International Conference on Cryptology and Information Security in Latin America
Editors: Tanja Lange, Orr Dunkelman
Series: Lecture Notes in Computer Science
Description: This book constitutes the refereed post-conference proceedings of the 5th International Conference on Cryptology and Information Security in Latin America, LATINCRYPT 2017, held in Havana, Cuba, in September 2017. The 20 papers presented were carefully reviewed and selected from 64 submissions. They are organized in the following topical sections: security protocols; public-key implementation; cryptanalysis; theory of symmetric-key cryptography; multiparty computation and privacy; new constructions; and adversarial cryptography.
Publication date: Conference proceedings 2019
Keywords: authentication; computer architecture; computer crime; computer networks; computer science; cryptography
Edition: 1
DOI: https://doi.org/10.1007/978-3-030-25283-0
ISBN (softcover): 978-3-030-25282-3
ISBN (eBook): 978-3-030-25283-0
Series ISSN: 0302-9743
Series E-ISSN: 1611-3349
Copyright: Springer Nature Switzerland AG 2019
1
An Obsession with Definitions |
Phillip Rogaway |
|
Abstract
Many people seem to think that cryptography is all about creating and analyzing cryptographic schemes. This view ignores the centrality of definitions in shaping the character of the field. More than schemes or their analysis, it is definitions that most occupy my thoughts. In this paper, written to accompany an invited talk at Latincrypt 2017, I try to explain my own fascination with definitions. I outline a few of the definitions I’ve recently worked on—garbling schemes, online AE, and onion encryption—and provide some general advice and comments about the definitional enterprise.
|
2
Anonymous Single-Round Server-Aided Verification |
Elena Pagnin,Aikaterini Mitrokotsa,Keisuke Tanaka |
|
Abstract
Server-Aided Verification (SAV) is a method that can be employed to speed up the process of verifying signatures by letting the verifier outsource part of its computation load to a third party. Achieving fast and reliable verification in the presence of an untrusted server is an attractive goal in cloud computing and Internet of Things scenarios. In this paper, we describe a simple framework for SAV where the interaction between a verifier and an untrusted server happens via a single-round protocol. We propose a security model for SAV that refines existing ones and includes the new notions of . . and .. In addition, we apply our definitional framework to provide the first generic transformation from any signature scheme to a single-round SAV scheme that incorporates verifiable computation. Our compiler identifies two independent ways to achieve SAV-anonymity: ., through the privacy of the verifiable computation scheme, or ., through the adaptability of the signature scheme. Finally, we define three novel instantiations of SAV schemes obtained through our compiler. Compared to previous works, our proposals are the only ones which simultaneously achieve existential unforgeability and soundness against collusion.
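
As a rough illustration of the single-round interaction only (not the paper's actual framework, schemes, or security notions), the following Python sketch fixes a generic interface: the verifier prepares a query from the signature, the untrusted server does the heavy computation, and the verifier performs a cheap local check. All names here are hypothetical.

```python
# Hypothetical single-round SAV interface (not the paper's framework).
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class SAVScheme:
    # Splits signature verification into an outsourced part and a cheap local check.
    prob_gen: Callable[[bytes, bytes, Any], Tuple[Any, Any]]  # (msg, sig, pk) -> (query, state)
    server_compute: Callable[[Any], Any]                      # query -> response (untrusted)
    local_verify: Callable[[Any, Any], bool]                  # (state, response) -> accept?

def sav_verify(scheme: SAVScheme, msg: bytes, sig: bytes, pk: Any) -> bool:
    """Single round: one query to the server, one response back."""
    query, state = scheme.prob_gen(msg, sig, pk)   # verifier-side preprocessing
    response = scheme.server_compute(query)        # heavy work, done by the server
    return scheme.local_verify(state, response)    # cheap verifier-side check
```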
|
3
Secure Channels and Termination: The Last Word on TLS |
Colin Boyd,Britta Hale |
|
Abstract
Secure channels are one of the most pivotal building blocks of cryptography today. Internet connections, secure messaging, protected IoT data, etc., all rely upon the security of the underlying channel. In this work we define channel protocols, as well as security for channels constructed from stateful length-hiding authenticated encryption (stLHAE) schemes. Furthermore, we initiate the concept of secure termination, where, upon receipt of a signifying message, a receiver is guaranteed to have received every message that has been sent, and will ever be sent, on the channel. We apply our results to real-world protocols, linking the channel environment to previous analyses of TLS 1.2, and demonstrating that TLS 1.2 achieves secure termination via fatal alerts and close_notify messages, per the specification of the Alert Protocol.
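
For readers unfamiliar with stateful channels, here is a minimal Python sketch of a sequence-numbered AE channel with an explicit terminating record. It uses ChaCha20-Poly1305 from the third-party cryptography package in place of a full stLHAE scheme (so there is no length hiding), and it is not the construction or model from the paper.

```python
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

FINAL = b"\x01"   # header flag for the terminating record (in the spirit of close_notify)
DATA = b"\x00"

class Channel:
    def __init__(self, key: bytes):
        self.aead = ChaCha20Poly1305(key)
        self.send_seq = 0
        self.recv_seq = 0
        self.closed = False

    def send(self, msg: bytes, final: bool = False) -> bytes:
        header = FINAL if final else DATA
        nonce = struct.pack(">4xQ", self.send_seq)       # 12-byte nonce from the counter
        self.send_seq += 1
        return header + self.aead.encrypt(nonce, msg, header)  # header authenticated as AD

    def recv(self, record: bytes) -> bytes:
        if self.closed:
            raise ValueError("channel already terminated")
        header, ct = record[:1], record[1:]
        nonce = struct.pack(">4xQ", self.recv_seq)
        self.recv_seq += 1
        msg = self.aead.decrypt(nonce, ct, header)       # raises on forgery or reordering
        if header == FINAL:
            self.closed = True                           # no later record will be accepted
        return msg

key = os.urandom(32)
alice, bob = Channel(key), Channel(key)
assert bob.recv(alice.send(b"hello")) == b"hello"
assert bob.recv(alice.send(b"bye", final=True)) == b"bye"
```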
|
4
Improved Security Notions for Proxy Re-Encryption to Enforce Access Control |
Ela Lee |
|
Abstract
Proxy Re-Encryption (PRE) allows a ciphertext encrypted under Alice’s public key to be transformed to an encryption under Bob’s public key without revealing either the plaintext or the decryption keys. PRE schemes have clear applications to cryptographic access control by allowing outsourced data to be selectively shared with users via re-encryption to appropriate keys. One concern for this application is that the server should not be able to perform unauthorised re-encryptions. We argue that current security notions do not adequately address this concern. We revisit existing definitions for PRE, starting by challenging the concept of unidirectionality, which states that re-encryption tokens from Alice to Bob cannot be used to re-encrypt from Bob to Alice. We strengthen this definition to reflect realistic scenarios in which adversaries may try to reverse a re-encryption by retaining information about prior ciphertexts and re-encryption tokens. We then strengthen the adversarial model to consider malicious adversaries that may collude with corrupt users and attempt to perform unauthorised re-encryptions; this models a malicious cloud service provider aiming to subvert the re-encryption process to leak sensitive data. Finally, we revisit the notion of authenticated encryption for PRE. This currently assumes that the same party who created the message also encrypted it, which is not necessarily the case in re-encryption. We thus introduce the notion of . to determine who encrypted the message (initiated a re-encryption) and show how to fulfil this requirement in practice.
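
To make the PRE syntax concrete, the following toy Python sketch implements a BBS98-style ElGamal proxy re-encryption over a tiny group. It is insecure (tiny parameters) and, notably, bidirectional: the proxy can invert the re-encryption key and transform ciphertexts from Bob back to Alice, which is precisely the kind of behaviour the strengthened unidirectionality notions discussed above are designed to capture. It is not the schemes or definitions from the paper.

```python
import secrets

p, q, g = 2039, 1019, 4          # toy safe-prime group; g generates the order-q subgroup

def keygen():
    a = secrets.randbelow(q - 1) + 1
    return a, pow(g, a, p)        # (sk, pk)

def enc(pk, m):                   # m must be an element of the order-q subgroup
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m * g^r, pk^r)

def dec(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)                # recover g^r
    return c1 * pow(gr, -1, p) % p

def rekeygen(sk_src, sk_dst):                      # needs both secrets: bidirectional!
    return sk_dst * pow(sk_src, -1, q) % q

def reenc(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))

a, A = keygen()
b, B = keygen()
m = pow(g, 123, p)                                 # encode the message in the subgroup
ct_a = enc(A, m)
ct_b = reenc(rekeygen(a, b), ct_a)                 # proxy re-encrypts Alice -> Bob
assert dec(b, ct_b) == m                           # Bob can read the re-encrypted ciphertext
```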
|
5
Optimal 2-3 Chains for Scalar Multiplication |
Cristobal Leiva,Nicolas Thériault |
|
Abstract
Using double-base chains to represent integers, in particular chains with bases 2 and 3, can be beneficial to the efficiency of scalar multiplication. However, finding an optimal 2-3 chain has long been thought to be more expensive than the scalar multiplication itself, complicating the use of 2-3 chains in practical applications where the scalar is used only a few times (as in the Diffie-Hellman key exchange). In the last few years, important progress has been made in obtaining the shortest possible double-base chain for a varying integer .. In 2008, Doche and Habsieger used a binary-tree based approach to get a (relatively close) approximation of the minimal chain. In 2015, Capuñay and Thériault presented the first deterministic polynomial-time algorithm to compute the minimal chain for a scalar, but the complexity of . is too high for use with varying scalars. More recently, Bernstein, Chuengsatiansup, and Lange used a graph-based approach to obtain an algorithm with running time .. In this work, we adapt the algorithm of Capuñay and Thériault to obtain minimal chains in . bit operations and . bits of memory. This allows us to obtain minimal chains for 256-bit integers in the 0.280 ms range, making it practical for reducing scalar multiplication costs for randomly-selected scalars. We also show how to extend the result to other types of double-base and triple-base chains (although the complexity for triple-base chains is cubic instead of quadratic). In the case of environments with restricted memory, our algorithm can be adapted to compute the minimal chain in . bit operations with only . bits of memory.
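
To illustrate the object being optimised, the Python sketch below builds a simple greedy 2-3 chain by repeated halving, thirding, and subtraction, and replays it as a scalar multiplication. This greedy chain is generally not minimal; computing minimal chains efficiently is precisely the paper's contribution and is not reproduced here.

```python
def greedy_23_chain(n: int) -> list:
    """Reduce n to 1 by halving, thirding, or subtracting 1; record the steps."""
    ops = []
    while n > 1:
        if n % 2 == 0:
            ops.append("double"); n //= 2
        elif n % 3 == 0:
            ops.append("triple"); n //= 3
        else:
            ops.append("add"); n -= 1
    return ops

def scalar_mul(n: int, P, add, double, triple):
    """Evaluate n*P by replaying the chain from the bottom up."""
    Q = P
    for op in reversed(greedy_23_chain(n)):
        Q = {"double": double, "triple": triple, "add": lambda x: add(x, P)}[op](Q)
    return Q

# Demo with integers standing in for curve points (add = +, double = 2x, triple = 3x).
k = 0x1234567
assert scalar_mul(k, 1, lambda x, y: x + y, lambda x: 2 * x, lambda x: 3 * x) == k
ops = greedy_23_chain(k)
print(len(ops), "group operations:", {o: ops.count(o) for o in set(ops)})
```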
|
6
Curve25519 for the Cortex-M4 and Beyond |
Hayato Fujii,Diego F. Aranha |
|
Abstract
We present techniques for the implementation of a key exchange protocol and digital signature scheme based on the Curve25519 elliptic curve and its Edwards form, respectively, on resource-constrained ARM devices. A possible application of this work consists of TLS deployments in the ARM Cortex-M family of processors and beyond. These processors sit in the lower to mid-range of the ARM core spectrum and are typically used in embedded devices. Our implementations improve the state of the art substantially by making use of novel implementation techniques and features specific to the target platforms.
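
As a point of reference for the arithmetic being optimised (and bearing no relation to the paper's Cortex-M4 code), here is a plain-Python X25519 evaluation following the Montgomery-ladder description in RFC 7748. It is not constant time and is for illustration only.

```python
import os

P = 2**255 - 19
A24 = 121665

def decode_scalar(k: bytes) -> int:
    k = bytearray(k)
    k[0] &= 248; k[31] &= 127; k[31] |= 64          # RFC 7748 clamping
    return int.from_bytes(k, "little")

def decode_u(u: bytes) -> int:
    u = bytearray(u); u[31] &= 127                   # mask the unused top bit
    return int.from_bytes(u, "little") % P

def cswap(swap: int, a: int, b: int):
    return (b, a) if swap else (a, b)                # NOT constant time

def x25519(k: bytes, u: bytes) -> bytes:
    kk, x1 = decode_scalar(k), decode_u(u)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):                   # Montgomery ladder, bit by bit
        kt = (kk >> t) & 1
        swap ^= kt
        x2, x3 = cswap(swap, x2, x3)
        z2, z3 = cswap(swap, z2, z3)
        swap = kt
        A = (x2 + z2) % P;  AA = A * A % P
        B = (x2 - z2) % P;  BB = B * B % P
        E = (AA - BB) % P
        C = (x3 + z3) % P;  D = (x3 - z3) % P
        DA = D * A % P;     CB = C * B % P
        x3 = (DA + CB) % P; x3 = x3 * x3 % P
        z3 = (DA - CB) % P; z3 = x1 * z3 * z3 % P
        x2 = AA * BB % P
        z2 = E * (AA + A24 * E) % P
    x2, x3 = cswap(swap, x2, x3)
    z2, z3 = cswap(swap, z2, z3)
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

# Diffie-Hellman consistency check over the standard base point u = 9.
base = (9).to_bytes(32, "little")
a, b = os.urandom(32), os.urandom(32)
assert x25519(a, x25519(b, base)) == x25519(b, x25519(a, base))
```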
|
7
Implementing the NewHope-Simple Key Exchange on Low-Cost FPGAs |
Tobias Oder,Tim Güneysu |
|
Abstract
Lattice-based cryptography is one of the most promising candidates being considered to replace current public-key systems in the era of quantum computing. In 2016 Alkim, Ducas, Pöppelmann, and Schwabe proposed the lattice-based key exchange scheme NewHope. The scheme has gained some popularity in the research community as it is believed to withstand attacks by quantum computers with a comfortable security margin and provides decent efficiency and low communication cost. In this work, we evaluate the efficiency of NewHope-Simple on reconfigurable hardware. We provide, to our knowledge, the first field-programmable gate array (FPGA) implementation of NewHope-Simple, a slight modification of NewHope proposed by the authors themselves in 2016. NewHope-Simple is basically NewHope with a different error correction mechanism. Our implementation of the client-side scheme requires 1,483 slices, 4,498 look-up tables (LUTs), and 4,635 flip-flops (FFs) on low-cost Xilinx Artix-7 FPGAs. The implementation of the server-side scheme takes 1,708 slices, 5,142 LUTs, and 4,452 FFs. Both cores use only two digital signal processors (DSPs) and four 18 Kb block memories (BRAMs). The implementation has a constant execution time to prevent timing attacks. The server-side operations take 1.4 ms and the client-side operations take 1.5 ms.
|
8
Theoretical Security Evaluation Against Side-Channel Cube Attack with Key Enumeration |
Haruhisa Kosuge,Hidema Tanaka |
|
Abstract
Side-channel cube attack (SCCA) is executed in a situation where an adversary can access some information about the internal states of the cipher. The adversary can obtain a system of linear equations from a set of chosen plaintexts called a cube and recover the secret key using the system. Error tolerance is a challenging problem in SCCA. To recover the secret key based on likelihoods in an error-prone environment, we propose SCCA with key enumeration (SCCA-KE). Precise likelihoods are computed to obtain lists of sub-key candidates, and an optimal list of complete-key candidates is generated by key enumeration. Then, we propose an evaluation method for SCCA-KE which includes information-theoretic evaluation and experimental evaluation by rank estimation. We apply the proposed evaluation method to PRESENT and show some conditions required to thwart SCCA-KE under realistic assumptions. Using the evaluation method, we can consider countermeasures with a sufficient security margin.
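
The enumeration step can be illustrated with a small best-first search that merges per-sub-key score lists into full-key candidates in decreasing total log-likelihood. This is only a generic sketch of key enumeration under the assumption of independent sub-key scores; the paper's likelihood computation and rank-estimation procedure are not reproduced.

```python
import heapq

def enumerate_keys(lists):
    """lists[i] = [(score, subkey), ...] sorted by descending score (log-likelihoods).
    Yields (total_score, full_key) in descending total score."""
    k = len(lists)
    start = (0,) * k
    total = lambda idx: sum(lists[i][j][0] for i, j in enumerate(idx))
    heap = [(-total(start), start)]
    seen = {start}
    while heap:
        neg, idx = heapq.heappop(heap)
        yield -neg, tuple(lists[i][j][1] for i, j in enumerate(idx))
        for i in range(k):                        # successors: advance one sub-key position
            nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
            if nxt[i] < len(lists[i]) and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (-total(nxt), nxt))

# Two 2-bit sub-keys with toy log-likelihoods; the likeliest full keys surface first.
L0 = [(-0.1, 0b01), (-0.9, 0b00), (-1.5, 0b11), (-2.0, 0b10)]
L1 = [(-0.3, 0b10), (-0.6, 0b11), (-1.1, 0b00), (-2.2, 0b01)]
for rank, (score, key) in enumerate(enumerate_keys([L0, L1])):
    print(rank, key, round(score, 2))
    if rank == 3:
        break
```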
|
9
On the Hardness of the Mersenne Low Hamming Ratio Assumption |
Marc Beunardeau,Aisling Connolly,Rémi Géraud,David Naccache |
|
Abstract
In a recent paper [.], Aggarwal, Joux, Prakash, and Santha (AJPS) describe an ingenious public-key cryptosystem mimicking NTRU over the integers. This algorithm relies on the properties of Mersenne primes rather than polynomial rings. The security of the AJPS cryptosystem relies on the conjectured hardness of the Mersenne Low Hamming Ratio Assumption, defined in [.]. This work shows that AJPS’ security estimates are too optimistic and describes an algorithm that recovers the secret key from the public key much faster than foreseen in [.]. In particular, our algorithm is practical (within the reach of the computational capabilities of a large organization), at least for the parameter choice . conjectured in [.] as corresponding to a . security level. The algorithm is fully parallelizable.
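
For concreteness, the following toy Python snippet generates an instance of the underlying problem: a Mersenne modulus p = 2^n - 1 and a public value H = F/G mod p where F and G have low Hamming weight. The parameters are illustrative only and unrelated to the attacked parameter set; no attack is shown here.

```python
import random

n, h = 127, 8                      # 2^127 - 1 is a Mersenne prime; toy weight h
p = 2**n - 1

def low_weight(bits: int, weight: int) -> int:
    """Random integer with exactly `weight` bits set among `bits` positions."""
    positions = random.sample(range(bits), weight)
    return sum(1 << i for i in positions)

F, G = low_weight(n, h), low_weight(n, h)
H = F * pow(G, -1, p) % p          # the public value; (F, G) is the secret
assert H * G % p == F              # the secret pair explains H
print(f"n={n}, h={h}, Hamming weight of H itself: {bin(H).count('1')}")  # ~n/2, looks random
```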
|
10
Energy-Efficient ARM64 Cluster with Cryptanalytic Applications |
Thom Wiggers |
|
Abstract
Getting a lot of CPU power used to be an expensive undertaking. Servers with many cores cost a lot of money and consume large amounts of energy. The developments in hardware for mobile devices have resulted in a surge of relatively cheap, powerful, and low-energy CPUs. In this paper we show how to build a low-energy, eighty-core cluster around twenty ODROID-C2 development boards for under 1500 USD. The ODROID-C2 is a 46 USD microcomputer that provides a 1.536 GHz quad-core Cortex-A53-based CPU and 2 GB of RAM. We investigate the cluster’s application to cryptanalysis by implementing Pollard’s Rho method to tackle the Certicom ECC2K-130 elliptic curve challenge. We optimise software from the . technical report for the Cortex-A53. To do so, we show how to use microbenchmarking to derive the needed instruction characteristics which ARM neglected to document for the public. The implementation of the ECC2K-130 attack finally allows us to compare the proposed platform to various other platforms, including “classical” desktop CPUs, GPUs, and FPGAs. Although it may be slower than, for example, FPGAs, our cluster still provides a lot of value for money.
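
To illustrate the method (though not the binary-curve arithmetic, parallelisation, or distinguished-point machinery used against ECC2K-130), here is a textbook Pollard Rho for discrete logarithms in a tiny prime-order group, using an r-adding walk and Floyd cycle-finding.

```python
import random

p, q, g = 2039, 1019, 4                      # toy order-q subgroup of Z_p^*

def rho_dlog(h: int) -> int:
    """Return x in [0, q) with g^x = h (mod p)."""
    def step(y, a, b):                       # r-adding walk; invariant: y = g^a * h^b (mod p)
        s = y % 3
        if s == 0:
            return y * g % p, (a + 1) % q, b
        if s == 1:
            return y * h % p, a, (b + 1) % q
        return y * y % p, 2 * a % q, 2 * b % q

    while True:
        a0 = random.randrange(q)
        x0 = (pow(g, a0, p), a0, 0)          # random starting point of the walk
        tort, hare = step(*x0), step(*step(*x0))
        while tort[0] != hare[0]:            # Floyd cycle-finding
            tort = step(*tort)
            hare = step(*step(*hare))
        _, a1, b1 = tort
        _, a2, b2 = hare
        if (b1 - b2) % q == 0:
            continue                         # degenerate collision, restart the walk
        # g^a1 * h^b1 = g^a2 * h^b2  =>  x = (a2 - a1) / (b1 - b2)  (mod q)
        return (a2 - a1) * pow((b1 - b2) % q, -1, q) % q

x = random.randrange(1, q)
assert rho_dlog(pow(g, x, p)) == x
```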
|
11
Generation of 8-Bit S-Boxes Having Almost Optimal Cryptographic Properties Using Smaller 4-Bit S-Boxes and Finite Field Multiplication |
Reynier Antonio de la Cruz Jiménez |
|
Abstract
Substitution Boxes (S-Boxes), as the only source of nonlinearity in modern ciphers, play a crucial role in the protection against differential, linear and algebraic attacks. The construction of S-Boxes with cryptographic properties close to optimal is an open problem. In this article we propose a new construction for generating such 8-bit permutations with nonlinearity up to a value of 108.
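
The paper's construction is not reproduced here, but the following hypothetical Python sketch shows the general flavour: build an 8-bit permutation from a 4-bit S-box (PRESENT's) and GF(2^4) multiplication via a small Feistel network, then measure its nonlinearity with a Walsh-Hadamard transform. The resulting nonlinearity will be well below the 108 achieved in the paper.

```python
# PRESENT's 4-bit S-box (any good 4-bit S-box would do for this toy).
S4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def gf16_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^4) with reduction polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13
        b >>= 1
    return r

def build_sbox() -> list:
    """Hypothetical 8-bit permutation: 3-round Feistel on nibbles whose round
    function mixes S4 with a GF(2^4) multiplication; a Feistel network is a
    permutation by construction."""
    S = []
    for x in range(256):
        l, r = x >> 4, x & 0xF
        for k in (1, 2, 3):
            l, r = r, l ^ S4[gf16_mul(r, k + 1) ^ k]
        S.append((l << 4) | r)
    return S

def fwht(v):
    """Walsh-Hadamard transform of a +/-1 vector (returns a new list)."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v

def nonlinearity(S) -> int:
    worst = 0
    for b in range(1, 256):                      # every nonzero output mask
        f = [-1 if bin(b & y).count("1") & 1 else 1 for y in S]
        worst = max(worst, max(abs(w) for w in fwht(f)))
    return 128 - worst // 2

S = build_sbox()
assert sorted(S) == list(range(256))             # it really is a permutation
print("nonlinearity of the toy S-box:", nonlinearity(S))
```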
|
12
XHX – A Framework for Optimally Secure Tweakable Block Ciphers from Classical Block Ciphers and Universal Hashing |
Ashwin Jha,Eik List,Kazuhiko Minematsu,Sweta Mishra,Mridul Nandi |
|
Abstract
Tweakable block ciphers are important primitives for designing cryptographic schemes with high security. In the absence of a standardized tweakable block cipher, constructions built from classical block ciphers remain an interesting research topic in both theory and practice. Motivated by Mennink’s . publication from 2015, Wang et al. proposed 32 optimally secure constructions at ASIACRYPT’16, each of which employs two calls to a classical block cipher. Yet, those constructions were still limited to .-bit keys and .-bit tweaks. Thus, applications with more general key or tweak lengths still lack support. This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show how the hash keys can be derived efficiently from the block cipher, generalizing the constructions by Wang et al. Finally, we propose efficient instantiations of the hash functions.
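
The following toy is only in the spirit of hash-then-rekey-and-whiten constructions such as XHX, not the XHX specification itself: the tweak is hashed three times (block-cipher key derivation, input whitening, output whitening), with SHA-256 standing in for the universal hash family and AES-128 from the cryptography package for the classical block cipher, so none of the paper's security claims carry over.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _h(key: bytes, tweak: bytes) -> bytes:
    """Keyed 128-bit hash of the tweak (stand-in for a universal hash)."""
    return hashlib.sha256(key + tweak).digest()[:16]

def tweakable_encrypt(k1, k2, k3, tweak: bytes, m: bytes) -> bytes:
    assert len(m) == 16
    block_key = _h(k1, tweak)                             # tweak-dependent AES key
    x = bytes(a ^ b for a, b in zip(m, _h(k2, tweak)))    # input whitening
    enc = Cipher(algorithms.AES(block_key), modes.ECB()).encryptor()
    y = enc.update(x) + enc.finalize()                    # single-block ECB call
    return bytes(a ^ b for a, b in zip(y, _h(k3, tweak))) # output whitening

def tweakable_decrypt(k1, k2, k3, tweak: bytes, c: bytes) -> bytes:
    block_key = _h(k1, tweak)
    y = bytes(a ^ b for a, b in zip(c, _h(k3, tweak)))
    dec = Cipher(algorithms.AES(block_key), modes.ECB()).decryptor()
    x = dec.update(y) + dec.finalize()
    return bytes(a ^ b for a, b in zip(x, _h(k2, tweak)))

k1, k2, k3 = (os.urandom(16) for _ in range(3))
msg, tweak = os.urandom(16), b"any-length tweak value"
assert tweakable_decrypt(k1, k2, k3, tweak,
                         tweakable_encrypt(k1, k2, k3, tweak, msg)) == msg
```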
|
13
Improved XKX-Based AEAD Scheme: Removing the Birthday Terms |
Yusuke Naito |
|
Abstract
Naito [ToSC 2017, Issue 2] proposed ., a tweakable blockcipher (TBC) based on a blockcipher (BC). It offers efficient authenticated encryption with associated data (AEAD) schemes with beyond-birthday-bound (BBB) security when combined with efficient TBC-based AEAD schemes such as .. In the resulting schemes, the BC is called once for each data block. The security bound is roughly ., where . is the block size of the BC in bits, . is the number of BC calls per query, . is the number of queries, . is the number of BC calls handling associated data in encryption queries, and . is the number of BC calls in decryption queries. Hence, assuming ., the AEAD schemes achieve BBB security. However, the birthday terms . and . might become dominant, for example when . is small (such as .) and when DoS attacks are performed. These birthday terms are introduced by the modular proof via .’s security proof. In this paper, in order to remove the birthday terms, we slightly modify . into a TBC called ., and directly prove the security of . combined with .. We show that the security bound becomes roughly ..
|
|
|