In the module "Foundations of Quantum Computing" we covered the fundamentals of quantum mechanics and quantum computing. Below I summarize the most important points.
The module was taught in English, so the following content is in English.
The most important mathematical foundations are summarized in the following cheat sheet:
Cheat Sheet – Mathematical Foundations of Quantum Computing
Standard Qubit States
$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
Inner product: $\begin{pmatrix} a & b \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = a \cdot c + b \cdot d$
Outer product examples:
$|0\rangle\langle 0| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$
$|0\rangle\langle 1| = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$
$|1\rangle\langle 0| = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$
$|1\rangle\langle 1| = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$
Shortcut for Calculating Outer Products
Note that if you read the ket-bra as a binary number (e.g. $|1\rangle\langle 0|$ as binary 10 = 2), that number is the row-major index of the single 1 in the matrix.
All other entries are 0.
Shortcut for Calculating Inner Products
Note that the inner product is 1 if the two numbers in the bra-ket are equal, and 0 otherwise:
⟨0∣0⟩=1
⟨0∣1⟩=0
⟨1∣0⟩=0
⟨1∣1⟩=1
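The two shortcuts above can be checked mechanically. The following is a small sketch (my own illustration, assuming numpy is available) that builds all four outer products and confirms the binary-index rule and the inner-product rule:

```python
import numpy as np

# |0> and |1> as column vectors
ket = {0: np.array([1, 0]), 1: np.array([0, 1])}

def outer(a, b):
    """Return the outer product |a><b| as a 2x2 matrix."""
    return np.outer(ket[a], ket[b])

for a in (0, 1):
    for b in (0, 1):
        m = outer(a, b)
        index = 2 * a + b                # read "ab" as a binary number
        assert m.flatten()[index] == 1   # the single 1 sits at that row-major index
        assert m.sum() == 1              # all other entries are 0
        # inner product <a|b> is 1 iff a == b
        assert ket[a] @ ket[b] == (1 if a == b else 0)

print(outer(1, 0))  # |1><0|: the 1 is at row-major index 2
```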
Bra-ket (Inner Products)
The "bracket" is a bra (row vector) multiplied with a ket (column vector), e.g.
$\langle 1 \vert 0 \rangle = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = 0$
Polar Basis States
$|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}, \qquad |-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix}$
Pauli Matrices
$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Pauli Matrices as Outer Products
$X = |0\rangle\langle 1| + |1\rangle\langle 0| = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
$Y = -i|0\rangle\langle 1| + i|1\rangle\langle 0| = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$
$Z = |0\rangle\langle 0| - |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Hermitian Transpose
For $z = a + bi$, $z^* = a - bi$.
Transpose the matrix and complex-conjugate its elements:
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad A^\dagger = \begin{pmatrix} a^* & c^* \\ b^* & d^* \end{pmatrix}$
$(A|y\rangle)^\dagger = \langle y|A^\dagger$
$(AB)^\dagger = B^\dagger A^\dagger$
$(aA + bB)^\dagger = a^* A^\dagger + b^* B^\dagger$
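These dagger rules can be checked numerically on random complex matrices. A minimal sketch (assuming numpy; "dagger" is the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(0)

def dagger(M):
    """Hermitian transpose: conjugate, then transpose."""
    return M.conj().T

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
y = rng.normal(size=2) + 1j * rng.normal(size=2)
a, b = 2 + 1j, 0.5 - 3j

# (A|y>)^dagger = <y| A^dagger   (componentwise: ((Ay)_k)* = (y^dagger A^dagger)_k)
assert np.allclose((A @ y).conj(), y.conj() @ dagger(A))
# (AB)^dagger = B^dagger A^dagger
assert np.allclose(dagger(A @ B), dagger(B) @ dagger(A))
# (aA + bB)^dagger = a* A^dagger + b* B^dagger
assert np.allclose(dagger(a * A + b * B),
                   a.conjugate() * dagger(A) + b.conjugate() * dagger(B))
```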
Effects on the Basis States
Effects of the Pauli Matrices and the Hadamard gate on the basis states and polar basis states
| Gate | Effect on ∣0⟩ | Effect on ∣1⟩ | Effect on ∣+⟩ | Effect on ∣−⟩ |
| --- | --- | --- | --- | --- |
| X | ∣1⟩ | ∣0⟩ | ∣+⟩ | −∣−⟩ |
| Y | i∣1⟩ | −i∣0⟩ | −i∣−⟩ | i∣+⟩ |
| Z | ∣0⟩ | −∣1⟩ | ∣−⟩ | ∣+⟩ |
| H | ∣+⟩ | ∣−⟩ | ∣0⟩ | ∣1⟩ |
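Every entry of the table can be verified directly by matrix-vector multiplication. A sketch (assuming numpy), spot-checking a few cells:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# X row of the table
assert np.allclose(X @ plus, plus) and np.allclose(X @ minus, -minus)
# Y row: Y|0> = i|1>, Y|+> = -i|->
assert np.allclose(Y @ ket0, 1j * ket1) and np.allclose(Y @ plus, -1j * minus)
# Z and H rows: Z|+> = |->, H|+> = |0>
assert np.allclose(Z @ plus, minus) and np.allclose(H @ plus, ket0)
```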
Kronecker Product
$A \otimes B = \begin{pmatrix} a_{11}B & \dots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \dots & a_{mn}B \end{pmatrix}$
Associativity
Distributivity
Non-commutativity
Mixed product property:
$(W \otimes X)(Y \otimes Z) = (WY) \otimes (XZ)$
Transpose and complex conjugate distribute:
$(X \otimes Y)^T = X^T \otimes Y^T, \qquad (X \otimes Y)^* = X^* \otimes Y^*$
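These Kronecker-product properties can be demonstrated with `np.kron`. A sketch (assuming numpy; random matrices stand in for W, X, Y, Z):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = [rng.normal(size=(2, 2)) for _ in range(4)]

# mixed product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# transpose distributes over the Kronecker product
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# associativity
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))
# non-commutativity (for generic A, B the two orders differ)
assert not np.allclose(np.kron(A, B), np.kron(B, A))
```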
Hadamard Gate
$H_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$
(For higher dimensions: $H_n = H_1 \otimes H_{n-1}$, etc.)
Also, $H = \frac{1}{\sqrt{2}}(X + Z)$ and $HXH = Z$.
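Both Hadamard identities, and the tensor-product construction of the 2-qubit Hadamard, can be checked numerically. A sketch assuming numpy:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

assert np.allclose(H, (X + Z) / np.sqrt(2))   # H = (X + Z)/sqrt(2)
assert np.allclose(H @ X @ H, Z)              # HXH = Z

# H_n = H_1 ⊗ H_{n-1}: the 2-qubit Hadamard is its own inverse too
H2 = np.kron(H, H)
assert np.allclose(H2 @ H2, np.eye(4))
```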
Vector Space Axioms
⊕ (vector addition)
Commutativity of addition
Associativity of addition
Neutral element (existence of a zero vector)
Inverse elements (existence of additive inverses)
⊗ (scalar multiplication)
Distributivity (over vector addition and scalar multiplication)
a⋅(x⊕y)=a⋅x⊕a⋅y
(a+b)⋅x=a⋅x⊕b⋅x
Scalar associativity (compatibility of scalar multiplication)
Unity of scalar multiplication (existence of multiplicative identity)
Key Matrix Definitions
Hermitian: A=A†
Hermitian matrices have real diagonal entries.
Eigenvalues of a Hermitian matrix are real.
Normal: AA†=A†A
Unitary: A†A=AA†=I
Eigenvalues have absolute value 1.
det(A) has absolute value 1.
Useful Facts
Trace(A) = sum of eigenvalues.
Det(A) = product of eigenvalues.
Rank(A) = maximum number of linearly independent rows (or columns).
Every idempotent matrix is a projection matrix (and vice versa).
Eigenvalues of an idempotent matrix are 0 or 1.
tr(A)=rank(A) if A is idempotent.
If A and B are idempotent and AB = BA, then AB is idempotent.
If A and B are idempotent, then A + B is idempotent if and only if AB = −BA.
If A is idempotent, then I − A is idempotent.
Projection matrix P:
$P = |u\rangle\langle u|$ (if $\|u\| = 1$, projects onto $\mathrm{span}\{u\}$)
$\langle u|u\rangle = 1 \Rightarrow |u\rangle\langle u|$ is idempotent
Satisfies $P^2 = P$.
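The projection and idempotence facts above can be illustrated with one concrete vector. A sketch (assuming numpy; the vector `u` is an arbitrary example):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)          # normalize so <u|u> = 1
P = np.outer(u, u.conj())          # P = |u><u|, projection onto span{u}

assert np.allclose(P @ P, P)       # idempotent: P^2 = P
# tr(A) = rank(A) for idempotent A (here both equal 1)
assert np.isclose(np.trace(P), np.linalg.matrix_rank(P))
# I - P is idempotent as well
Q = np.eye(3) - P
assert np.allclose(Q @ Q, Q)
```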
Involutory Matrices
Involutory: $A^2 = I$.
Eigenvalues are ±1.
$\det(A) = \pm 1$.
A is involutory $\Leftrightarrow \frac{I+A}{2}$ is idempotent.
A is involutory $\Leftrightarrow \frac{I-A}{2}$ is idempotent.
Powers of A alternate between I and A.
If A and B are involutory, then AB is involutory if and only if AB = BA.
Permutation Matrices
Permutation matrix P:
Square, entries in {0,1}.
Exactly one "1" in each row and column, others 0.
Sums of each row and each column are 1.
Orthogonal, unitary, and invertible.
If you multiply a permutation matrix by itself often enough, you eventually get the identity.
Product of two permutation matrices is another permutation matrix.
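The "multiply by itself often enough" fact reflects that every permutation has finite order. A sketch (assuming numpy; the 3-cycle below is an arbitrary example):

```python
import numpy as np

# permutation matrix for the cycle (0 1 2), leaving index 3 fixed
P = np.eye(4)[[1, 2, 0, 3]]

assert np.allclose(P @ P.T, np.eye(4))   # orthogonal (hence unitary, invertible)
assert (P.sum(axis=0) == 1).all() and (P.sum(axis=1) == 1).all()

# repeated multiplication eventually reaches the identity
k, M = 1, P.copy()
while not np.allclose(M, np.eye(4)):
    M = M @ P
    k += 1
print(k)  # order of the permutation: 3 for a 3-cycle
```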
Eigenvectors and Eigenvalues
For a matrix A, an eigenvector $x$ and eigenvalue $\lambda$ satisfy
$Ax = \lambda x.$
For a Hermitian A, λ is real.
For a Hermitian matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal.
For a unitary A, ∣λ∣=1.
For an idempotent A, λ∈{0,1}.
For an involutory A, λ∈{±1}.
The topics above build on concepts from linear algebra that were assumed to be known. Below I summarize the concepts covered in previous modules:
More or less relevant linear algebra concepts
Eigenvectors and Eigenvalues
For a matrix A, an eigenvector $x$ and eigenvalue $\lambda$ satisfy
$Ax = \lambda x.$
Determinant of a Matrix
For a 2x2 Matrix
For a 2x2 matrix
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$
the determinant is given by:
det(A)=ad−bc.
Using Cofactor Expansion
For an n×n matrix, the determinant can be calculated using cofactor expansion along any row or column. For example, expanding along the first row:
$\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j}),$
where A1j is the (n−1)×(n−1) submatrix obtained by removing the first row and j-th column of A.
For example, for a 3x3 matrix
$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix},$
the determinant is given by:
$\det(A) = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}.$
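Cofactor expansion along the first row translates directly into a short recursive function. A sketch in plain Python (illustrative, not efficient; 0-based indices make the sign factor $(-1)^{1+j}$ become $(-1)^j$):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor A_{1j}: delete the first row and the j-th column
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))  # 2*12 - 0 + 1*1 = 25
```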
Properties of Determinants
Multiplicative Property: det(AB)=det(A)det(B)
Transpose: det(AT)=det(A)
Inverse: If A is invertible, $\det(A^{-1}) = \frac{1}{\det(A)}$
Determinant of Identity Matrix: det(I)=1
Row Operations:
Swapping two rows multiplies the determinant by −1.
Multiplying a row by a scalar k multiplies the determinant by k.
Adding a multiple of one row to another row does not change the determinant.
Kernel, Image, and Related Concepts
Kernel (Null Space)
The kernel (or null space) of a matrix A, denoted as ker(A) or null(A), is the set of all vectors x such that Ax=0. In other words, it is the set of solutions to the homogeneous equation Ax=0.
Image (Column Space)
The image (or column space) of a matrix A, denoted as im(A) or col(A), is the set of all vectors that can be expressed as Ax for some vector x. It is the span of the columns of A.
Rank
The rank of a matrix A, denoted as rank(A), is the dimension of the image (column space) of A. It represents the maximum number of linearly independent columns of A.
Nullity
The nullity of a matrix A, denoted as nullity(A), is the dimension of the kernel (null space) of A. It represents the number of linearly independent solutions to the homogeneous equation Ax=0.
Rank-Nullity Theorem
The rank-nullity theorem states that for any m×n matrix A:
rank(A)+nullity(A)=n,
where n is the number of columns of A.
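The rank-nullity theorem can be checked numerically by computing the rank and the null-space dimension independently. A sketch (assuming numpy; the null space is read off from the near-zero singular values of an SVD, and the example matrix is my own):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]])   # row 2 = 2 * row 1, so the rank drops to 2

rank = np.linalg.matrix_rank(A)

# nullity = number of (near-)zero singular values
_, s, _ = np.linalg.svd(A)
nullity = int(sum(sv < 1e-10 for sv in s))

assert rank == 2 and nullity == 1
assert rank + nullity == A.shape[1]   # rank-nullity: rank(A) + nullity(A) = n
```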
What follows is the rest of my notes, which I did not have time to add to my Markdown file.
Worksheets
General Bit Manipulation
Calculate required bits for a binary number: $\lfloor \log_2(n) \rfloor + 1$
Convert a decimal number to binary: repeatedly take $\mathrm{mod}(n, 2)$ and continue with $\mathrm{div}(n, 2)$
Convert a decimal number to binary Gray code: $n \oplus (n \gg 1)$ (XOR the number with itself shifted right by one)
Bitshift right by 1: equivalent to dividing by 2
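These recipes are one-liners in Python. A sketch of all three (the function names are my own):

```python
from math import floor, log2

def bits_required(n):
    """Number of bits needed to write n in binary (n >= 1)."""
    return floor(log2(n)) + 1

def to_binary(n):
    """Repeated mod/div by 2; collect remainders, most significant bit first."""
    digits = []
    while n > 0:
        digits.append(n % 2)
        n //= 2          # right shift by 1 == integer division by 2
    return digits[::-1] or [0]

def gray(n):
    """Binary-reflected Gray code: n XOR (n >> 1)."""
    return n ^ (n >> 1)

print(bits_required(5))      # 3
print(to_binary(5))          # [1, 0, 1]
print(format(gray(5), "b"))  # 5 = 101 -> Gray code 111
```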
Boolean Fourier Transform
Every pseudo-boolean function can be represented as a polynomial in the form of a sum of products of boolean variables:
$f(x) = \sum_{S \subseteq [n]} \omega_S \prod_{i \in S} x_i = \sum_{S \subseteq [n]} \omega_S \varphi_S(x) = \omega^T \varphi(x)$
The Pauli matrices (together with the identity) can easily be written as linear combinations of outer products:
$I = |0\rangle\langle 0| + |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
$X = |0\rangle\langle 1| + |1\rangle\langle 0| = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
$Y = -i|0\rangle\langle 1| + i|1\rangle\langle 0| = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$
$Z = |0\rangle\langle 0| - |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Because they are unitary, their eigenvalues have absolute value 1. Because they are Hermitian, their eigenvalues are real.
Therefore, the eigenvalues of the Pauli matrices are ±1.
Because they are Hermitian, their eigenvectors (for distinct eigenvalues) are orthogonal.
Their Eigenvectors are:
I: $(1, 0)^T$ and $(0, 1)^T$
X: $(1, 1)^T$ and $(1, -1)^T$
Z: $(1, 0)^T$ and $(0, 1)^T$
Y: $(1, i)^T$ and $(1, -i)^T$
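Both the ±1 eigenvalues and the listed eigenvectors can be confirmed numerically. A sketch assuming numpy:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# eigenvalues of each Pauli matrix are +1 and -1
for P in (X, Y, Z):
    vals = np.linalg.eigvals(P)
    assert np.allclose(sorted(vals.real), [-1, 1])
    assert np.allclose(vals.imag, 0)   # Hermitian => real eigenvalues

# (1, i) and (1, -i) are eigenvectors of Y with eigenvalues +1 and -1
assert np.allclose(Y @ np.array([1, 1j]), np.array([1, 1j]))
assert np.allclose(Y @ np.array([1, -1j]), -np.array([1, -1j]))
```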
Matrix Exponential
$e^{i\theta X} = \cos(\theta) I + i \sin(\theta) X$
Therefore,
$R_X(\theta) = e^{-i\theta X} = \cos(\theta) I - i \sin(\theta) X$
$R_Y(\theta) = e^{-i\theta Y} = \cos(\theta) I - i \sin(\theta) Y$
$R_Z(\theta) = e^{-i\theta Z} = \cos(\theta) I - i \sin(\theta) Z$
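The cos/sin form of $e^{i\theta X}$ can be checked against a directly computed matrix exponential. A sketch assuming numpy only (the exponential is evaluated via the eigendecomposition of the Hermitian matrix X, avoiding a scipy dependency):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.7

# e^{i theta X} via eigendecomposition: X = V diag(vals) V^dagger
vals, vecs = np.linalg.eigh(X)
expm = vecs @ np.diag(np.exp(1j * theta * vals)) @ vecs.conj().T

assert np.allclose(expm, np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * X)

# RX(theta) = e^{-i theta X} (this module's convention) is unitary
RX = vecs @ np.diag(np.exp(-1j * theta * vals)) @ vecs.conj().T
assert np.allclose(RX @ RX.conj().T, np.eye(2))
```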
Linear Combinations of Pauli Matrices
Every matrix $A \in \mathbb{C}^{2 \times 2}$ can be written as a linear combination of the Pauli matrices (and the identity).
This is because the standard basis matrices are themselves linear combinations of I, X, Y, Z:
$|0\rangle\langle 0| = \frac{I+Z}{2}, \quad |0\rangle\langle 1| = \frac{X+iY}{2}, \quad |1\rangle\langle 0| = \frac{X-iY}{2}, \quad |1\rangle\langle 1| = \frac{I-Z}{2}$
(Multiplying a matrix with a standard basis vector, say $(1, 0)^T$, extracts the corresponding column of the matrix.)
You can also write C as:
C=F⊗nT+I⊗sT
because the Kronecker product is linear.
Bilinearity of the Kronecker Product
We start with:
$|x\rangle \otimes |y\rangle = \left(\sum_{j=1}^{m} x_j |u_j\rangle\right) \otimes \left(\sum_{k=1}^{n} y_k |v_k\rangle\right).$
Distributing the terms:
$= \sum_{j=1}^{m} \sum_{k=1}^{n} (x_j |u_j\rangle) \otimes (y_k |v_k\rangle).$
Which simplifies to:
$= \sum_{j=1}^{m} \sum_{k=1}^{n} x_j y_k (|u_j\rangle \otimes |v_k\rangle).$
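The derivation above can be replayed numerically: expanding two vectors in the standard basis and summing $x_j y_k (|u_j\rangle \otimes |v_k\rangle)$ reproduces the direct Kronecker product. A sketch assuming numpy (the vectors are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0, 5.0])

m, n = len(x), len(y)
u = np.eye(m)   # standard basis vectors |u_j>
v = np.eye(n)   # standard basis vectors |v_k>

# double sum from the derivation
total = sum(x[j] * y[k] * np.kron(u[j], v[k])
            for j in range(m) for k in range(n))

assert np.allclose(total, np.kron(x, y))   # matches |x> ⊗ |y> directly
```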
Tensor Products and the Born Rule
If two states $|a\rangle$ and $|b\rangle$ obey the Born rule (their squared amplitudes sum to 1), then their tensor product $|a\rangle \otimes |b\rangle$ also obeys the Born rule.