A thermal camera can capture the change in keyboard surface temperature after a human touch. This phenomenon can be exploited to steal users' passwords physically. In this paper, based on a study of the thermal dynamics of keyboards, we design a password-breaking system using an infrared thermal camera. First, we build a signal model that describes the dynamic process of temperature change on the keyboard using Newton's law of cooling. Next, we develop a maximum likelihood parameter estimation algorithm to estimate the keystroke time instants. Then, by maximizing the probability of the key-order arrangement, we develop a novel password-breaking algorithm. Our algorithm is tested on simulated data as well as real-world data. Experimental results show that it is effective for physical password breaking using thermal characteristics. Based on these results, we conclude with a discussion of strategies for password protection.
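The first two steps of the abstract can be illustrated with a small sketch. Under Newton's law of cooling, a key touched at instant tau warms by some amount and then decays back toward the ambient temperature; with i.i.d. Gaussian sensor noise, maximum likelihood estimation of tau reduces to least-squares fitting. All parameter values below (ambient temperature, touch rise, cooling rate) are illustrative assumptions, not the paper's figures, and the grid-search estimator is only a minimal stand-in for the paper's algorithm.

```python
import numpy as np

# Newton's law of cooling: after a touch at time tau, the key temperature
# decays as T(t) = T_env + dT * exp(-k * (t - tau)) for t >= tau.
# Hypothetical parameter values, chosen for illustration only.
T_ENV, DT, K = 25.0, 7.0, 0.15   # ambient temp (C), touch rise (C), cooling rate (1/s)

def key_temp(t, tau):
    """Temperature of one key over time, touched at instant tau."""
    t = np.asarray(t, dtype=float)
    return T_ENV + DT * np.exp(-K * (t - tau)) * (t >= tau)

def estimate_touch_time(t, samples, tau_grid):
    """Grid-search ML estimate of the keystroke instant.

    Under i.i.d. Gaussian sensor noise, maximizing the likelihood is
    equivalent to minimizing the squared residual against the model.
    """
    residuals = [np.sum((samples - key_temp(t, tau)) ** 2) for tau in tau_grid]
    return tau_grid[int(np.argmin(residuals))]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 400)
true_tau = 3.5
noisy = key_temp(t, true_tau) + rng.normal(0.0, 0.05, t.size)
tau_hat = estimate_touch_time(t, noisy, np.linspace(0.0, 10.0, 1001))
```

Recovering one keystroke instant per key in this way yields the set of time instants whose ordering the paper's final step then reasons about.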
Additive manufacturing (AM, or 3D printing) is a novel manufacturing technology that has been adopted in both industrial and consumer settings. However, the technology's reliance on computerization has raised various security concerns. In this paper, we address sabotage via tampering during the 3D printing process by presenting an approach that can verify the integrity of a 3D printed object. Our approach operates on the acoustic side-channel emanations generated by the 3D printer's stepper motors, which yields a non-intrusive, real-time validation process that is difficult to compromise. The proposed approach comprises two algorithms. The first generates a master audio fingerprint for a verifiably unaltered printing process. The second is applied when the same 3D object is printed again, and validates the monitored 3D printing process by assessing the similarity of its audio signature to the master audio fingerprint. To evaluate the quality of the proposed approach, we identify detectability thresholds for the following minimal tampering primitives: insertion, deletion, replacement, and modification of a single tool path command. By detecting a deviation at the time it occurs, we can stop the printing process for compromised objects, saving time and preventing material waste. We also discuss various factors that impact the method, such as background noise, changes of audio device, and different audio recorder positions.
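The fingerprint-comparison idea can be sketched as follows. This is only an assumed, minimal realization: it uses per-frame magnitude spectra as the "fingerprint" and cosine similarity as the similarity measure, whereas the paper's actual features and thresholds may differ. The synthetic sine tones stand in for motor sounds; the 440/900 Hz frequencies and the 0.9 threshold are arbitrary illustration choices.

```python
import numpy as np

FRAME = 256  # samples per analysis frame (illustrative choice)

def fingerprint(signal):
    """Per-frame magnitude spectra, used here as a simple audio fingerprint."""
    n_frames = len(signal) // FRAME
    frames = signal[:n_frames * FRAME].reshape(n_frames, FRAME)
    return np.abs(np.fft.rfft(frames, axis=1))

def frame_similarity(master, monitored):
    """Cosine similarity between corresponding frames of two fingerprints."""
    num = np.sum(master * monitored, axis=1)
    den = np.linalg.norm(master, axis=1) * np.linalg.norm(monitored, axis=1) + 1e-12
    return num / den

def first_deviation(master, monitored, threshold=0.9):
    """Index of the first frame whose similarity drops below the threshold,
    i.e. where the print could be halted; -1 if the signatures match."""
    sim = frame_similarity(master, monitored)
    below = np.flatnonzero(sim < threshold)
    return int(below[0]) if below.size else -1

# Synthetic stand-in for the motor sound of an unaltered print.
t = np.arange(FRAME * 8) / 8000.0
master_audio = np.sin(2 * np.pi * 440.0 * t)
# A tampered print: the tool path (and hence the motor sound) changes mid-print.
tampered = master_audio.copy()
tampered[FRAME * 5:] = np.sin(2 * np.pi * 900.0 * t[FRAME * 5:])

fp_master = fingerprint(master_audio)
fp_bad = fingerprint(tampered)
dev_frame = first_deviation(fp_master, fp_bad)
```

Because the deviation is localized to a frame index, the monitoring process can stop the printer at that point rather than after the print completes, which is the time- and material-saving behavior the abstract describes.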
In today’s big, messy data age, enormous amounts of data are generated all around us. Examples include texts, tweets, network traffic, changing Facebook connections, and video surveillance feeds coming in from one or multiple cameras. Dimension reduction and noise/outlier removal are usually important preprocessing steps before any high-dimensional (big) data set can be used for inference. A common way to do this is by solving the principal component analysis (PCA) problem or one of its robust extensions. The basic PCA problem has been studied for over a century, since the early work of Pearson in 1901 and Hotelling in 1933. The aim of PCA is to reduce the dimensionality of multivariate data while preserving as much of the relevant information as possible. It is often the first step in various types of exploratory data analysis, predictive modeling, and classification and clustering tasks, and it finds applications in biomedical imaging, computer vision, process fault detection, recommendation-system design, and many other domains.
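As a concrete picture of PCA as a dimension-reduction preprocessing step, the following minimal sketch computes the top-k principal directions of a centered data matrix via the SVD and projects the data onto them. The synthetic data set (10-dimensional points lying near a 2-dimensional subspace) is an assumption made for illustration.

```python
import numpy as np

def pca(X, k):
    """Classical PCA via the SVD.

    X : (n_samples, n_features) data matrix.
    Returns the top-k principal directions and the k-dimensional scores.
    """
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                       # top-k right singular vectors
    return components, Xc @ components.T      # project onto the subspace

rng = np.random.default_rng(0)
# Synthetic data lying near a 2-D subspace of a 10-D ambient space.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 10))
components, scores = pca(X, 2)
```

Here almost all of the variance survives the 10-to-2 reduction, which is exactly the "preserve as much relevant information as possible" goal stated above.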
“PCA” refers to the following problem. Given a data set (a set of data vectors or, more generally, a set of data “tensors”) and a dimension k, find the k-dimensional subspace that “best” approximates the given data set. There are various notions of “best”; the traditional one used for classical PCA is either the minimum Frobenius norm or the minimum spectral norm of the approximation error of the data matrix. PCA, without constraints and for clean data, is a solved problem: by the Eckart–Young–Mirsky theorem, computing the top k left singular vectors of the data matrix returns the PCA solution. On the other hand, robust PCA, which refers to the problem of PCA in the presence of outliers, is a much harder problem and one for which provably correct solutions have started appearing only recently. The same is true for dynamic PCA (subspace tracking or streaming PCA), dynamic or recursive robust PCA (robust subspace tracking), PCA and subspace tracking with missing data, the related low-rank matrix completion problem, and sparse PCA. Sparse PCA refers to the PCA problem when the principal components are assumed to be sparse. In fact, even the classical PCA problem with speed or memory constraints is not well understood.
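The Eckart–Young–Mirsky optimality claim can be checked numerically: the rank-k truncated SVD of a matrix attains the smallest possible Frobenius-norm error among all rank-k approximations, so any competing rank-k approximation (here, projection onto a random k-dimensional subspace) must do at least as badly. The matrix sizes and k below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 30))
k = 5

# Best rank-k approximation: keep the top-k singular triplets.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = (U[:, :k] * S[:k]) @ Vt[:k]
svd_err = np.linalg.norm(A - A_svd)

# A competing rank-k approximation: project the columns of A onto a
# random k-dimensional subspace (orthonormalized via QR).
Q, _ = np.linalg.qr(rng.normal(size=(50, k)))
rand_err = np.linalg.norm(A - Q @ (Q.T @ A))
```

The theorem also gives the optimal error in closed form: it equals the root-sum-of-squares of the discarded singular values, which the test below verifies.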
The above issues have become particularly important for modern data sets because 1) the data matrix is often so large that it cannot be stored directly in the computer’s memory (need for streaming solutions); 2) a lot of data contain missing entries and/or outlier-corrupted entries (need for matrix completion and robust versions of PCA and subspace recovery); 3) a lot of data arrive sequentially, the data subspace itself may change over time, the entire data set cannot be stored but short batches can be, and decisions are often needed in real time or near real time (need for dynamic PCA, robust PCA, and subspace tracking); 4) data are often distributed and stored over multiple locations, and one needs to perform PCA without communicating all the data to a central location (need for distributed PCA); and 5) many types of data are better represented as a tensor data set rather than a vector data set or matrix (need for tensor PCA).
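The streaming setting in points 1) and 3) can be sketched with one of the oldest streaming-PCA methods, Oja's rule, which updates an estimate of the leading principal direction one sample at a time and never stores the data matrix. This is only one classical algorithm among the many the special issue surveys; the step size and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def oja_leading_direction(stream, dim, step=0.01):
    """Track the leading principal direction from a stream of samples.

    Each sample nudges the estimate toward itself in proportion to
    their inner product (a Hebbian update), then the estimate is
    renormalized, which is Oja's rule.
    """
    w = np.ones(dim) / np.sqrt(dim)
    for x in stream:
        w += step * (x @ w) * x
        w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(0)
true_dir = np.array([3.0, 4.0]) / 5.0         # dominant direction of the data
samples = (rng.normal(size=(5000, 1)) * true_dir
           + 0.1 * rng.normal(size=(5000, 2)))
w_hat = oja_leading_direction(samples, dim=2)
```

Because each update touches only the current sample, the memory cost is independent of the number of samples, which is precisely what the streaming constraint in point 1) demands.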
PCA and its many extensions are used everywhere, as summarized above. Moreover, PCA is also a key intermediate or initialization step in solving many convex and nonconvex optimization problems. All areas of electrical engineering and computer science (EECS) have benefited hugely from, and have contributed significantly to, solutions of PCA and its extensions. Most people in EECS know certain aspects of PCA, but not all, and the special issue “Rethinking PCA for Modern Data Sets: Theory, Algorithms, and Applications,” published in the Proceedings of the IEEE in August 2018, helps bridge this knowledge gap for EECS researchers from various backgrounds.