The main configuration file, compiled with pdfLaTeX and typeset according to the World Scientific publisher's format. The source code of a sample chapter and the compiled results are attached below.
The first edition of this book has been published; interested readers can consult its Amazon listing: http://www.amazon.com/Mathematical-Structures-Quantum-Mechanics-Chang/dp/9814366587
Cover, preface, table of contents, and selected chapters
Bibliography (manually edited) and index
Source code with comments:
\documentclass[11pt]{book}
\paperheight=9in
\paperwidth=6in
\textwidth=4.7in
\textheight=7in
\lineskip=13pt
\setlength{\topmargin}{-0.15in}
\setlength{\headheight}{0in}
\setlength{\headsep}{12pt}
\setlength{\footskip}{0.5in}
\setlength{\marginparsep}{0in}
\setlength{\marginparwidth}{0.75in}
\setlength{\oddsidemargin}{-0.25in}
\setlength{\evensidemargin}{-0.25in}
%Page layout settings
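%A sketch of an equivalent setup using the geometry package (an alternative,
%not what this file actually uses):
%\usepackage[paperwidth=6in,paperheight=9in,textwidth=4.7in,textheight=7in]{geometry}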
\usepackage{bm, amsmath, amsfonts, amssymb, url}
%Load bold math, AMS symbol/font, and url packages
\usepackage{titlesec}
%Heading format package
\usepackage{graphicx,subfigure}
%Graphics and subfigure packages
\usepackage{appendix, indentfirst, makeidx, array}
%Appendix, first-paragraph indentation, index, and array packages
\usepackage{booktabs}
%Table rules package
\usepackage{fancyhdr}
\pagestyle{fancy}
%Header/footer package
\usepackage[cam,a4,center]{crop}
%Crop marks for preprint proofreading; the crop options are removed for the final submission
\renewcommand{\headrulewidth}{0.4pt}
\renewcommand{\sectionmark}[1]{\markright{\textit{\thesection~ #1}}}
\renewcommand{\leftmark}{\textit{Chapter}~\thechapter }
\fancyfoot{}
\fancyhead[RE]{\footnotesize\leftmark}
\fancyhead[LO]{\footnotesize\rightmark}
\fancyhead[LE,RO]{\thepage}
%Headers: page number on the outer edge, chapter/section title on the inner edge
\setcounter{tocdepth}{2}
%Print the table of contents down to the subsection level
\renewcommand{\labelenumi}{(\alph{enumi})}
%Change enumerate labels to letters
\renewcommand{\arraystretch}{1.2}
%Increase row spacing in arrays and tables
\titleformat{\chapter}[display]{\Large\bf}{\LARGE{Chapter~\thechapter}}{2mm}{}
\titleformat{\section}{\large\bf}{\thesection}{1.5mm}{}
\titleformat{\subsection}{\bfseries\itshape}{\thesubsection}{1.5mm}{}
%Customize chapter and section heading formats
%The WS editor requires italic subsection headings, hence \itshape
%User-defined symbol commands below
\newcommand{\proposition}[1]{\bigskip\noindent\fbox{\begin{minipage}{0.95\textwidth}~\begin{minipage}{0.95\textwidth}#1\end{minipage}\\\smallskip\end{minipage}}\bigskip\vline\vline\vline}
%Framed box for propositions
\newcommand{\excercise}{\addtocounter{subsection}{+1}
\subsubsection{Ex~\underline{\thesubsection}}}
%Exercise heading (note that ``exercise'' is misspelled in the command name)
\newcommand{\stag}[1]{\tag{\arabic{chapter}.\arabic{equation}#1}}
%Custom equation subnumbering, e.g. 1.2a, 1.2b, etc.
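%A usage sketch of \stag (the chapters below happen to write explicit \tag instead):
%\begin{align}
%x&=r\cos\theta,\stag{a}\\
%y&=r\sin\theta.\stag{b}
%\end{align}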
\newcommand{\qdo}[0]{\quad}
\newcommand{\qdt}[0]{\quad\quad}
\newcommand{\qdthr}[0]{\quad\quad\quad}
\newcommand{\qdf}[0]{\quad\quad\quad\quad}
\newcommand{\qdfiv}[0]{\quad\quad\quad\quad\quad}
%User-defined spacing commands of various lengths; some authors fine-tune equation spacing carefully, so predefining a few blank-space commands is convenient
\newcommand{\norm}[1]{\|#1\|}
%\newcommand{\sdelta}{{\scriptstyle\Delta}}
\newcommand{\sdelta}{\Delta}
\newcommand{\bra}[1]{\langle#1|}
\newcommand{\ket}[1]{|#1\rangle }
\newcommand{\bket}[1]{\Big |#1\Big\rangle }
\newcommand{\braket}[2]{\langle#1|#2\rangle }
\newcommand{\proj}[2]{|#1\rangle\langle#2|}
\newcommand{\expect}[2]{\langle #1\rangle_{#2}}
\newcommand{\Kappa}[0]{\bm{\mathcal{K}}}
\newcommand{\parity}[0]{\bm{\mathcal{P}}}
\newcommand{\parityl}[0]{\bm{p}}
\newcommand{\charge}[0]{\bm{\mathcal{C}}}
%Frequently used symbol definitions
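%For example, \braket{\phi}{\psi} typesets $\langle\phi|\psi\rangle$,
%\proj{\psi}{\psi} the projector $|\psi\rangle\langle\psi|$, and
%\expect{A}{\psi} the expectation value $\langle A\rangle_{\psi}$.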
\makeindex
%Generate the index
\begin{document}
\abovedisplayskip=-6pt plus 3pt minus 0pt
\belowdisplayskip=8pt plus 3pt minus 0pt
\abovedisplayshortskip=-6pt plus 3pt minus 0pt
\belowdisplayshortskip=8pt plus 3pt minus 0pt
%Reduce the default spacing between displayed equations and the surrounding paragraphs
\begin{titlepage}
\author{Kow Lung Chang\\\\
Physics Department, National Taiwan University}
\title{\LARGE\textsc{ Mathematical Structures\\ of \\Quantum Mechanics}}
\date{~}
\maketitle
\end{titlepage}
%Title page
\pagestyle{empty}
%Suppress the page number on the title page
\frontmatter
%Front matter: preface and table of contents are numbered with roman numerals
~\\~\\~\\~\\~\\~\\~\\
\begin{center}
{\LARGE \sl TO FEZA AND SUHA}
\end{center}
%Dedication page
~~~
\newpage
\setcounter{page}{7}
%Reset the page number per the editor's request (book information pages are inserted before the dedication)
\pagestyle{plain}
\begin{raggedleft} {\LARGE \textbf{Preface}}\end{raggedleft}\\\\
%Preface heading
During the past few years, after a couple of weeks of lecturing the course of quantum mechanics that I offered at the Physics Department, National Taiwan University, some students would usually come to ask me as to what extent they had to refurbish their mathematical background in order to follow my lecture with ease and confidence. It was hard for me to provide a decent and proper answer to the question, and very often students would show reluctance to invest extra time on subjects such as group theory or functional analysis when I advised them to take some advanced mathematics courses. All these experiences that I have encountered in my class eventually motivated me to write this book.\\
The book is designed with the hope that it might be helpful to those students I mentioned above. It could also serve as a complementary text in quantum mechanics for students of inquiring minds who appreciate the rigor and beauty of quantum theory.\\
Assistance received from many sources made the appearance of this book possible. I wish to express here my great appreciation and gratitude to Dr.\ Yusuf G\"{u}rsey, who painstakingly went through the manuscript and responded generously by giving very helpful suggestions and comments, and made corrections line by line. I would also like to thank Mr.\ Paul Black who provided me with cogent suggestions and criticism of the manuscript, particularly in those sections on quantum uncertainty. I am indebted as well to Mr.\ Chih Han Lin who, with immense patience, compiled the whole text and drew all the figures from my suggestions. All his hard work and attention resulted in the present form of this book.\\
\begin{flushleft}
{\it Taipei, Taiwan}\\
{\it March, 2011}\hfill Kow Lung Chang
\end{flushleft}
\newpage
\setcounter{page}{9}
\pagestyle{fancy}
\fancyhead[RE]{\footnotesize\rightmark}
%The header on the TOC pages would otherwise show ``Chapter 0''; use \rightmark so the correct title (e.g. Index) appears
\pretolerance=10000
%The editor requires no hyphenation in the TOC, so shrink the font and raise \pretolerance
\begin{small}
\tableofcontents
\end{small}
\pretolerance=100
%Restore the \pretolerance setting after the TOC
\mainmatter
\fancyhead[RE]{\footnotesize\leftmark}
\pagestyle{fancy}
%Restore the header format after the TOC
\input{chap1.tex}
\input{chap2.tex}
\input{chap3.tex}
\input{chap4.tex}
\input{chap5.tex}
\backmatter
\newpage
\thispagestyle{empty}
\chapter{Bibliography}
%\begin{raggedleft} {\LARGE \textbf{Bibliography}}\end{raggedleft}\\\\%211
\begin{small}
\noindent Dirac, P. A. M., {\it The Principles of Quantum Mechanics}, 4th Edition, (Oxford University Press, London, 1958).\\[-8pt]
\noindent Feynman, R. P. and A. P. Hibbs, {\it Quantum Mechanics and Path Integrals}, (McGraw-Hill, Inc. 1965).\\[-8pt]
%Bibliography entered manually; remaining entries omitted here
\newpage
\thispagestyle{empty}
%\renewcommand{\chaptername}{~}
%%%%%%%%-----index--------------%%%%%
\fancyfoot{}
\fancyhead[RE]{\footnotesize\textit{index}}
\fancyhead[LO]{\footnotesize\textit{index}}
\fancyhead[LE,RO]{\thepage}
\cleardoublepage
%Header format for the index
\addcontentsline{toc}{chapter}{Index}
\setlength{\columnsep}{20pt}
%Widen the gap between the two columns
\begin{small}
\pretolerance=10000
%Typeset the index without hyphenation
\printindex
\end{small}
\end{document}
Part of the LaTeX source of chap1.tex:
\chapter{Postulates and Principles of Quantum\\ Mechanics}
As with many fields in physics, a precise and rigorous description of a given subject requires the use of some mathematical tools. Take Lagrange's
formulation of classical mechanics, for instance: one needs a basic knowledge of variational calculus in order to derive the equations of motion for a
system of particles in terms of generalized coordinates. To formulate the postulates of quantum mechanics, it is likewise necessary to acquire some
knowledge of vector spaces in general, and of Hilbert space in particular. It is in this chapter that we shall provide the minimal but essential mathematical
preparation that allows one to perceive and understand the general framework of quantum theory and to appreciate the rigorous derivation of the quantum principles.
\section{Vector space}\index{vector!vector space}\index{space!vector space}
A \textbf{vector space} $\mathcal{V}$ is a set of elements, called vectors, with the following two operations:
\begin{itemize}
\item An operation of addition, which, for each pair of vectors $\psi$ and $\phi$, specifies a new vector $ \psi +\phi \in \mathcal{V} $, called the sum of $\psi$ and $\phi$.
\item An operation of scalar multiplication, which for each vector $ \psi $ and a number $ a $, specifies a vector $ a\psi $, such that (assuming $ a,b$ are numbers and $ \psi , \phi $ and $ \chi $ are vectors)
~\\[-24pt]
\addtocounter{equation}{1}
\begin{align}
~&\psi +\phi=\phi +\psi,\tag{\arabic{chapter}.\arabic{equation}a}\\
~&\psi +(\phi +\chi)=(\psi +\phi )+\chi,\tag{\arabic{chapter}.\arabic{equation}b}\\
~&\psi +\mathit{0}=\psi, ~~\mathit{0}~\mbox{is null vector},\tag{\arabic{chapter}.\arabic{equation}c}~~~~~~~~~~~\\
~&a(\psi +\phi)=a\psi +a\phi,\tag{\arabic{chapter}.\arabic{equation}d} \\
~&(a+b)\psi=a\psi+b\psi,\tag{\arabic{chapter}.\arabic{equation}e} \\
~&a(b\psi)=(ab)\psi,\tag{\arabic{chapter}.\arabic{equation}f}\\
~&1\cdot\psi=\psi,\tag{\arabic{chapter}.\arabic{equation}g}\\
~&0\cdot \psi=\mathit{0},\tag{\arabic{chapter}.\arabic{equation}h}
\end{align}
\end{itemize}
\noindent where, if $ a,b $ are real numbers, we call this vector space a real vector space and denote it by $ \mathcal{V}_r $. If, on the other hand, $ a $ and $ b $ are complex numbers, we call it a complex vector space\index{space!complex vector space}\index{space!complex vector space|seealso{$ \mathcal{C}^n $-space}} $ \mathcal{V}_c $.
\subsubsection*{Example}
We take $ n $-dimensional Euclidean space,\index{space!Euclidean space}\index{space!Euclidean space|seealso{$ \mathcal{R}^n $-space}}\index{Euclidean space} $ \mathcal{R}^n $-space,\index{space!$ \mathcal{R}^n $-space} as an example. It is a vector space with the vectors $ \psi $ and $ \phi $ specified as $ \psi =(x_1, x_2,\ldots,$ $x_i, \ldots, x_n) $ and $ \phi =(y_1,y_2,\ldots,y_i,\ldots,y_n) $, where $ x_i $ and $ y_i $ ($ i=1,2,\ldots,$ $ n $) are all taken as real numbers. The sum of $ \psi $ and $ \phi $ becomes $ (x_1+y_1,x_2+y_2,\ldots,x_i+y_i,\ldots,x_n+y_n) $ and $ a\psi =(ax_1,ax_2,\ldots,ax_i,\ldots,ax_n) $. If $ a $ and $ x_i $ are taken as complex numbers, then $ \psi $ is a vector in $ \mathcal{C}^n$-space,\index{space!$ \mathcal{C}^n$-space} an $ n $-dimensional complex vector space.
It is easily understood that the set of continuous functions $ f(x) $ for $ a\leqslant x\leqslant b$ forms a vector space, namely $ \mathcal{L}^2(a,b)$-space\index{space!$ \mathcal{L}^2(a,b)$-space}.
Before leaving this section, we also introduce some terminologies in the following subsections that will be
frequently referred to in later chapters.
\subsection{Linearly dependent and linearly independent}
\index{vector!linear dependent}\index{vector!linear independent}\index{linear dependent}\index{linear independent}
Consider a set of $ m $ vectors $ \{\psi_1,\psi_2,\ldots, \psi_m\} $, and we construct the linear combination of these $ m $ vectors as follows:
\begin{align}
\sum_{i=1}^m a_i\psi_i.~~~~~~~~~~~~~~~
\end{align}
\noindent This linear combination of $ m $ vectors is of course a vector. If it is a \textbf{null vector}\index{vector!null vector}\index{null vector} only when all the coefficients $ a_i =0$ for $ i=1,2,\ldots, m $, then the set of $ m $ vectors $ \{\psi_1,\psi_2,\ldots, \psi_m\} $ is called linearly independent. If there is at least one coefficient $ a_l \neq 0$ such that $ \sum_{i=1}^ma_i\psi_i=\mathit{0} $, then the set $ \{\psi_1,\psi_2,\ldots, \psi_m\} $ is called linearly dependent.
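As a simple illustration, consider in $ \mathcal{R}^2 $ the two vectors $ \psi_1=(1,0) $ and $ \psi_2=(0,1) $: the combination $ a_1\psi_1+a_2\psi_2=(a_1,a_2) $ is the null vector only if $ a_1=a_2=0 $, so this set is linearly independent. The set $ \{(1,0),(2,0)\} $, however, is linearly dependent, since $ 2\cdot(1,0)-(2,0)=\mathit{0} $.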
\subsection{Dimension and basis}
The maximum number of linearly independent vectors in $ \mathcal{V} $ is called the \textbf{dimension}\index{dimension}\index{dimension!of vector space}\index{space!dimension of vector space} of $ \mathcal{V} $. Any $n$-linearly independent vectors in $ n $-dimensional vector space $ \mathcal{V} $ form the \textbf{basis}\index{basis}\index{vector!basis of} of the vector space.
\section{Inner product}
An \textbf{inner product}\index{inner product}, sometimes called a \textbf{scalar product}\index{scalar product}, in a vector space is a numerically valued function of the ordered pair of vectors $ \psi $ and $ \phi $, denoted by $ (\psi,\phi) $, such that, for a scalar $ a $,
\addtocounter{equation}{1}
\begin{align}
&(\psi,\phi+\chi)=(\psi,\phi)+(\psi,\chi), \tag{\arabic{chapter}.\arabic{equation}a} \\
&(\psi,a\phi)=a(\psi,\phi),\tag{\arabic{chapter}.\arabic{equation}b}\\
&(\psi,\phi)=(\phi,\psi)^*,\tag{\arabic{chapter}.\arabic{equation}c}\\
&(\psi,\psi)\geqslant 0, (\psi,\psi)=0 ~\mbox{if and only if}~\psi ~\mbox{is a null vector}.\tag{\arabic{chapter}.\arabic{equation}d}
\end{align}
Two vectors $ \psi $ and $ \phi $ are said to be orthogonal to each other if their corresponding inner product vanishes, namely $ (\psi, \phi)=0 $.
For example, let us consider the vectors in $ \mathcal{C}^n $-space $ \psi=(x_1,x_2,\ldots,$ $ x_n) $ and $ \phi=(y_1,y_2,\ldots, y_n) $ where $ x_i $ and $ y_i $ are complex numbers. The inner product of $ \psi $ and $ \phi $ is written as
\begin{align}
\displaystyle(\psi,\phi)=\sum_{i=1}^n x_i^*y_i=x_1^*y_1+x_2^*y_2+\cdots +x_n^*y_n.
\end{align}
Consider the set of continuous functions $ f(x) $ where $ a\leqslant x\leqslant b $. An ordered pair of functions $ f(x) $ and $ g(x) $ defines the inner product as\\[-11pt]
$ (f(x),g(x))=\int_a^bf(x)^*g(x)dx $. This vector space is called $ \mathcal{L}^2(a,b) $-space\\[+4pt] when $ \int_a^b|f(x)|^2dx $ and $ \int_a^b|g(x)|^2dx $ are finite.\index{space!$ \mathcal{L}^2(a,b)$-space}
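As a quick numerical check in $ \mathcal{C}^2 $-space, take $ \psi=(1,i) $ and $ \phi=(i,1) $. Then $ (\psi,\phi)=1^*\cdot i+i^*\cdot 1=i-i=0 $, so $ \psi $ and $ \phi $ are orthogonal, while $ (\psi,\psi)=1^*\cdot 1+i^*\cdot i=2 $.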
\subsection{Schwarz inequality}
We are now in a position to prove the Schwarz inequality.
Let $ \psi $ and $ \phi $ be any two vectors. The \textbf{Schwarz inequality}\index{Schwarz inequality} reads as
\begin{align}
|(\psi,\phi)|=\sqrt{(\psi,\phi)(\phi,\psi)}\leqslant\sqrt{(\psi,\psi)}\sqrt{(\phi,\phi)}.
\end{align}
\subsubsection*{Proof}
Note that $ (\psi+\alpha\phi,\psi+\alpha\phi)\geqslant 0 $, where $ \alpha =\xi+i\eta $ is a complex number. Regard this inner product $ (\psi+\alpha\phi,\psi+\alpha\phi)=f(\xi, \eta) $ as a function of the two variables $ \xi $ and $ \eta $. Then
\begin{align}
f(\xi, \eta)=(\psi,\psi)+|\alpha|^2(\phi,\phi)+\alpha (\psi,\phi)+\alpha^* (\phi,\psi),
\end{align}
\noindent which is non-negative.
Let us look for the minimum of $ f(\xi, \eta) $ at $ \xi_0, \eta_0 $ by solving
\begin{align}
\frac{\partial f(\xi, \eta)}{\partial \xi}\Bigg|_{\xi_0,\eta_0}=\frac{\partial f(\xi, \eta)}{\partial \eta}\Bigg|_{\xi_0,\eta_0}=0,
\end{align}
\noindent and we obtain
\begin{align}
\xi_0=\frac{1}{2}\frac{(\psi,\phi)+(\phi,\psi)}{(\phi,\phi)},~~~~
\eta_0=-\frac{i}{2}\frac{(\psi,\phi)-(\phi,\psi)}{(\phi,\phi)}.
\end{align}
\noindent Therefore
\begin{align}
f(\xi_0,\eta_0)=(\psi,\psi)-\frac{(\psi,\phi)(\phi,\psi)}{(\phi,\phi)}\geqslant 0,
\end{align}
\noindent that can be cast into the familiar expression of Schwarz inequality.\index{Schwarz inequality}
\subsection{Gram-Schmidt orthogonalization process}
\index{Gram-Schmidt orthogonalization}
The inner product we have been considering can be applied to the orthogonalization of the basis in the $ n $-dimensional vector space. Let $\{ \psi_1, \psi_2,$ $\ldots , \psi_n \} \in \mathcal{V}$ be the set of $ n $-linearly independent vectors. Since $(\psi_i,\psi_j)\neq 0 $ in general, we can construct a new set of vectors $\{\psi_1',\psi_2',\ldots,\psi_n'\}$ such that
$(\psi_i',\psi_j')= 0 $ for all $ i $ and $ j $ unless $ i=j $, namely $ \psi_i' $ and $ \psi_j' $ are orthogonal to each other for $ i\neq j $ by the following procedure:
First take $ \psi_1'=\psi_1 $ and construct $ \psi_2'=\psi_2+\alpha\psi_1' $. In order to force $ \psi_2' $ to be orthogonal to $ \psi_1' $, we solve for $ \alpha $ so as to meet the condition $ (\psi_2',\psi_1')=0 $, i.e.
\begin{align}
(\psi_2',\psi_1')=(\psi_2,\psi_1')+\alpha^*(\psi_1',\psi_1')=0,
\end{align}
\noindent and we obtain $ \alpha=-(\psi_2,\psi_1')^*/(\psi_1',\psi_1')=-(\psi_1',\psi_2)/(\psi_1',\psi_1') $, hence
\begin{align}
\psi_2'=\psi_2-\psi_1'\frac{(\psi_1',\psi_2)}{(\psi_1',\psi_1')}.~~~~~~
\end{align}
The same procedure can be performed repeatedly to reach $ \psi_3'=\psi_3+\alpha\psi_2'+\beta\psi_1' $ which guarantees $ (\psi_3',\psi_1')=(\psi_3',\psi_2')=0 $ with $ \alpha = -(\psi_2',\psi_3) / (\psi_2',\psi_2') $ and $ \beta = -(\psi_1',\psi_3)/ (\psi_1',\psi_1') $. In general,
\begin{align}
\notag\psi_i'=\psi_i-\psi_{i-1}'\frac{(\psi_{i-1}',\psi_i)}{(\psi_{i-1}',\psi_{i-1}')}- \psi_{i-2}'\frac{(\psi_{i-2}',\psi_i)}{(\psi_{i-2}',\psi_{i-2}')}-\cdots-\psi_{1}'\frac{(\psi_{1}',\psi_i)}{(\psi_{1}',\psi_{1}')}.\\[-20pt]
~
\end{align}
The set of orthogonal basis vectors $\{\psi_1',\psi_2',\ldots,\psi_n'\}$ can be normalized immediately by multiplying each vector by the inverse square root of the corresponding inner product, i.e.
\begin{align}
\tilde{\psi_i}=\frac{\psi_i'}{\sqrt{(\psi_i',\psi_i')}},~~~~~~~~
\end{align}
\noindent and $\{\tilde{\psi_1},\tilde{\psi_2},\ldots,\tilde{\psi_n}\}$ becomes the \textbf{orthonormal}\index{orthonormal}\index{vector!orthonormal set of} set of the basis in the vector space. From now on we shall take the basis to be orthonormal without mentioning it particularly.
\subsubsection*{Example}
Consider the following set of continuous functions in $ \mbox{C}(-\infty,\infty) $
\begin{align}
f_n(x)=x^n\exp\left(-\frac{x^2}{2}\right),~~~n=0,1,\ldots.~~~~~~~~
\end{align}
\noindent We construct the new set of orthogonal vectors\index{vector!orthogonal vectors}\index{orthogonal vectors} by applying the Gram-Schmidt process and obtain:
\begin{align}
f_0'(x)&=f_0(x)=\exp\left(-\frac{x^2}{2}\right),\\
f_1'(x)&=f_1-\frac{f_0'(f_0',f_1)}{(f_0',f_0')}=f_1(x)=x\exp\left(-\frac{x^2}{2}\right),\\
f_2'(x)&=f_2-\frac{f_1'(f_1',f_2)}{(f_1',f_1')}-\frac{f_0'(f_0',f_2)}{(f_0',f_0')}=\left(x^2-\frac{1}{2}\right)\exp\left(-\frac{x^2}{2}\right).
\end{align}
\noindent Similarly we have $ f_3'(x)=(x^3-3x/2)\exp(-x^2/2) $. The orthonormal functions can be calculated according to
\begin{align}
\tilde{f_n}(x)=\frac{f_n'(x)}{\sqrt{(f_n'(x),f_n'(x))}}=\frac{1}{\sqrt{2^nn!\sqrt{\pi}}}\exp\left(-\frac{x^2}{2}\right)H_n(x),
\end{align}
\noindent where $ H_n(x) $ are called Hermite polynomials. One also recognizes that the $ \tilde{f_n}(x) $ are, in fact, the eigenfunctions of the Schr\"{o}dinger equation for the one-dimensional harmonic oscillator.
\section{Completeness and Hilbert space}
\index{completeness}\index{Hilbert space}\index{space!Hilbert space}
Let us introduce some other terminologies in discussing Hilbert space.
\subsection{Norm}
A \textbf{norm}\index{norm} on a vector space is a non-negative real function such that, if $ \psi , \phi$ are vectors, the norm of $ \psi $ is written as $ \norm{\psi} $, satisfying:
~\\[-24pt]
\addtocounter{equation}{1}
\begin{align}
&\norm{\psi}\geqslant 0,~~~~~ \norm{\psi}=0 ~~~\mbox{iff}~~~ \psi~ \mbox{is null vector,}\tag{\arabic{chapter}.\arabic{equation}a}\\
&\norm{a\psi}=|a|\cdot\norm{\psi},\tag{\arabic{chapter}.\arabic{equation}b}\\
&\norm{\psi +\phi}\leqslant \norm{\psi}+\norm{\phi}.\tag{\arabic{chapter}.\arabic{equation}c}
\end{align}
\subsubsection{Example}
If $ f(x)\in\mbox{C}(a,b) $, namely if $ f(x) $ is a continuous function for a variable that lies between $ a $ and $ b $, the norm of $ f(x) $ can be defined either as $ \norm{f(x)}=\mbox{Max}\{|f(x)|,a\leqslant x\leqslant b\} $ or as the inner product of $ f(x) $, i.e.
$$ \norm{f(x)}^2=(f(x),f(x))=\int_a^b|f(x)|^2dx. $$
\subsection{Cauchy sequence and convergent sequence}
Consider an infinite dimensional vector space and denote the basis by $ \{\phi_1,\phi_2,\phi_3,\ldots \} $. We construct the partial sum $ \psi_N=\sum_ia_i\phi_i $, where $ i $ runs from $ 1 $ to $ N $, and obtain $ \ldots,\psi_j,\psi_{j+1},\ldots ,\psi_m,\psi_{m+1},\ldots, \psi_n,\ldots $ for increasing values of $ N $, which forms an infinite sequence. The sequence is called a \textbf{Cauchy sequence}\index{Cauchy sequence} if
$$
\lim_{n\rightarrow \infty} \psi_n=\lim_{m\rightarrow \infty} \psi_m,~~~~~~~~
$$
or, more precisely, in terms of the norm: $ \displaystyle\lim_{n,m\rightarrow \infty}\norm{\psi_n-\psi_m}=0 $.
The vectors $ \psi_m $ are said to converge to $ \psi $ if
$$
\lim_{m\rightarrow\infty}\psi_m =\psi, ~~~\mbox{or}~~~ \lim_{m\rightarrow \infty}\norm{\psi_m-\psi}=0,
$$
\noindent then $ \{\ldots,\psi_{m-1},\psi_{m},\ldots\} $ is called a \textbf{convergent sequence}\index{convergent sequence}.
It is easily concluded that every convergent sequence is a Cauchy sequence. Yet the converse is not necessarily true: a Cauchy sequence is not always a convergent sequence.
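A standard illustration is provided by the (incomplete) space of rational numbers: the sequence $ 1, 1.4, 1.41, 1.414,\ldots $ of decimal truncations of $ \sqrt{2} $ is a Cauchy sequence, yet it converges to no rational limit; only in the complete space of real numbers does it become a convergent sequence.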
\subsection{Complete vector space}
\index{completeness!complete vector space}\index{space!complete vector space}
A vector space in which every Cauchy sequence $ \{\psi_m\} $ converges to a limiting vector $ \psi $ is called a complete vector space.
\subsection{Hilbert space}
\index{Hilbert space}\index{space!Hilbert space}
A Hilbert space is a complete vector space with norm defined as the inner product. A Hilbert space, finite dimensional or infinite dimensional, is separable if its basis is countable.
\section{Linear operator}
A \textbf{linear operator}\index{operator!linear operator} $ \mathbf{A} $ on a vector space assigns to each vector $ \psi $ a new vector, i.e. $ \mathbf{A}\psi=\psi' $ such that
\begin{align}
\mathbf{A}(\psi+\phi)=\mathbf{A}\psi+\mathbf{A}\phi,~~~\mathbf{A}(\alpha\psi)=\alpha\mathbf{A}\psi .~~~~~~~~
\end{align}
Two operators $ \mathbf{A} $, $ \mathbf{B} $ are said to be equal if $ \mathbf{A}\psi=\mathbf{B}\psi $ for all $ \psi $ in the vector space.
For convenience in later discussion, we denote
\begin{itemize}
\item $ \mathbf{O} $: null operator\index{operator!null operator} such that $ \mathbf{O}\psi=\mathit{0} $ for all $ \psi $, and $ \mathit{0} $ is the null vector.
\item $ \mathbf{I} $: unit operator\index{operator!unit operator} or identity operator\index{operator!identity operator}\index{identity!identity operator} such that $ \mathbf{I}\psi=\psi$.
\end{itemize}
The sum of the operators $ \mathbf{A} $ and $ \mathbf{B} $ is an operator, such that $ (\mathbf{A}+\mathbf{B})\psi=\mathbf{A}\psi+\mathbf{B}\psi $. The product of operators $ \mathbf{A} $ and $ \mathbf{B} $ is again an operator that one writes as $ \mathbf{A}\cdot\mathbf{B} $ or $ \mathbf{A}\mathbf{B} $ such that $ (\mathbf{A}\mathbf{B})\psi=\mathbf{A}(\mathbf{B}\psi) $.
The order of the operators in a product matters greatly: in general, $ \mathbf{A}\mathbf{B}\neq\mathbf{B}\mathbf{A} $. The associative rule holds for the product of operators: $ \mathbf{A}(\mathbf{B}\mathbf{C})=(\mathbf{A}\mathbf{B})\mathbf{C} $.
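A minimal example of non-commuting operators on a two-dimensional vector space, written as matrices, is
\begin{align*}
\mathbf{A}=\begin{pmatrix}0&1\\0&0\end{pmatrix},~~~
\mathbf{B}=\begin{pmatrix}0&0\\1&0\end{pmatrix},~~~
\mathbf{A}\mathbf{B}=\begin{pmatrix}1&0\\0&0\end{pmatrix}\neq
\begin{pmatrix}0&0\\0&1\end{pmatrix}=\mathbf{B}\mathbf{A}.
\end{align*}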
\subsection{Bounded operator}
An operator $ \mathbf{A} $ is called a \textbf{bounded operator}\index{operator!bounded operator} if there exists a positive number $ b $ such that
$$
\norm{\mathbf{A}\psi}\leqslant b\norm{\psi}, ~~~\mbox{for any vector}~\psi ~\mbox{in the vector space.}
$$
The least upper bound (supremum) of $ \mathbf{A} $, namely the smallest such number $ b $ for a given operator $ \mathbf{A} $ and for any $ \psi $ in $ \mathcal{V} $, is denoted by
\begin{align}
\norm{\mathbf{A}}=\mbox{sup}\left\{\frac{\norm{\mathbf{A}\psi}}{\norm{\psi}},~~~\psi\neq\mathit{0}\right\},~~~\qdfiv
\end{align}
\noindent then $ \norm{\mathbf{A}\psi}\leqslant\norm{\mathbf{A}}\norm{\psi} $.\\[+5pt]
We are now able to show readily that $ \norm{\mathbf{A}+\mathbf{B}} \leqslant \norm{\mathbf{A}}+\norm{\mathbf{B}} $.
\subsubsection*{Proof}
Recall that $ \norm{\mathbf{A}\psi}\leqslant\norm{\mathbf{A}}\norm{\psi} $ and $ \norm{\mathbf{B}\psi}\leqslant\norm{\mathbf{B}}\norm{\psi} $. Then
\begin{align*}
\norm{\mathbf{A}+\mathbf{B}}&=\mbox{sup}\left\{\frac{\norm{(\mathbf{A}+\mathbf{B})\psi}}{\norm{\psi}},\psi\neq\mathit{0}\right\}=\mbox{sup}\left\{\frac{\norm{\mathbf{A}\psi+\mathbf{B}\psi}}{\norm{\psi}},\psi\neq\mathit{0}\right\}\\
&\leqslant \mbox{sup}\left\{\frac{\norm{\mathbf{A}\psi}}{\norm{\psi}},\psi\neq\mathit{0}\right\}+\mbox{sup}\left\{\frac{\norm{\mathbf{B}\psi}}{\norm{\psi}},\psi\neq\mathit{0}\right\}=\norm{\mathbf{A}}+\norm{\mathbf{B}}.
\end{align*}
Similarly, we have $ \norm{\mathbf{A}\mathbf{B}}\leqslant \norm{\mathbf{A}} \norm{\mathbf{B}} $.
\subsection{Continuous operator}
Consider the convergent sequence $ \{\ldots,\psi_m,\psi_{m+1},\ldots,\psi_n,\ldots \}$ such that $ \displaystyle\lim_{n\rightarrow \infty}\norm{\psi_n-\psi}=0 $. If $ \mathbf{A} $ is a bounded operator, then $ \{\ldots,\mathbf{A}\psi_m,$ $\mathbf{A}\psi_{m+1},$ $\ldots,\mathbf{A}\psi_n,\ldots\}$ is also a convergent sequence because
$$ \displaystyle\lim_{n\rightarrow \infty}\norm{\mathbf{A}\psi_n-\mathbf{A}\psi}\leqslant\norm{\mathbf{A}}\lim_{n\rightarrow \infty}\norm{\psi_n-\psi}=0. $$
\noindent We call such an operator $ \mathbf{A} $ a \textbf{continuous operator}\index{operator!continuous operator}.
\subsection{Inverse operator}
An operator $ \mathbf{A} $ has an \textbf{inverse operator}\index{operator!inverse operator} if there exists $ \mathbf{B}_R $ such that $ \mathbf{A}\mathbf{B}_R=\mathbf{I} $; we call $ \mathbf{B}_R $ the right inverse of $ \mathbf{A} $. Similarly, if there exists an operator $ \mathbf{B}_L $ such that $ \mathbf{B}_L\mathbf{A}=\mathbf{I} $, we call $ \mathbf{B}_L $ the left inverse of $ \mathbf{A} $. In fact, the left inverse is always equal to the right inverse for a given operator $ \mathbf{A} $, because
\begin{align}
\mathbf{B}_L=\mathbf{B}_L\mathbf{I}=\mathbf{B}_L(\mathbf{A}\mathbf{B}_R) = (\mathbf{B}_L\mathbf{A})\mathbf{B}_R=\mathbf{I}\mathbf{B}_R=\mathbf{B}_R.
\end{align}
The inverse operator of a given operator $ \mathbf{A} $ is also unique. If the operators $ \mathbf{B} $ and $ \mathbf{C} $ are both inverses of $ \mathbf{A} $, then $ \mathbf{C}=\mathbf{C}\mathbf{I}=\mathbf{C}(\mathbf{A}\mathbf{B})=(\mathbf{C}\mathbf{A})\mathbf{B}=\mathbf{B} $.
The uniqueness of the inverse operator of $ \mathbf{A} $ allows us to write it in the form $ \mathbf{A}^{-1} $, namely $ \mathbf{A}\mathbf{A}^{-1}=\mathbf{A}^{-1}\mathbf{A}=\mathbf{I} $. It is easily verified that $ (\mathbf{A}\mathbf{B})^{-1}=\mathbf{B}^{-1}\mathbf{A}^{-1} $.
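Indeed, $ (\mathbf{A}\mathbf{B})(\mathbf{B}^{-1}\mathbf{A}^{-1})=\mathbf{A}(\mathbf{B}\mathbf{B}^{-1})\mathbf{A}^{-1}=\mathbf{A}\mathbf{A}^{-1}=\mathbf{I} $, and similarly $ (\mathbf{B}^{-1}\mathbf{A}^{-1})(\mathbf{A}\mathbf{B})=\mathbf{I} $.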
\subsection{Unitary operator}\index{operator!unitary operator}\index{unitary!unitary operator}
An operator $ \mathbf{U} $ is unitary if $ \norm{\mathbf{U}\psi}=\norm{\psi} $ for every vector $ \psi $. A unitary operation preserves the inner product of any pair of vectors, i.e. $ (\mathbf{U}\psi, \mathbf{U}\phi)=(\psi,\phi) $. This can be proved as follows:
Let $ \chi=\psi+\phi $ and we have
\begin{align*}
(\mathbf{U}\chi,\mathbf{U}\chi)&=(\mathbf{U}(\psi+\phi),\mathbf{U}(\psi+\phi))\\
&=(\mathbf{U}\psi,\mathbf{U}\psi)+(\mathbf{U}\psi,\mathbf{U}\phi)+(\mathbf{U}\phi,\mathbf{U}\psi)+(\mathbf{U}\phi,\mathbf{U}\phi)\\[-10pt]
&=\norm{\mathbf{U}\psi}^2+\norm{\mathbf{U}\phi}^2+2\Re\{(\mathbf{U}\psi,\mathbf{U}\phi)\},
\end{align*}
\noindent and on the other hand,
\begin{align*}
(\mathbf{U}\chi,\mathbf{U}\chi)&=(\psi+\phi,\psi+\phi)\\
&=(\chi,\chi)=(\psi,\psi)+(\psi,\phi)+(\phi,\psi)+(\phi,\phi)\\[-10pt]
&=\norm{\psi}^2+\norm{\phi}^2+2\Re\{(\psi,\phi)\}.
\end{align*}
Since $ \norm{\mathbf{U}\psi}=\norm{\psi},\norm{\mathbf{U}\phi}=\norm{\phi} $, we have $ \Re\{(\mathbf{U}\psi,\mathbf{U}\phi)\}=\Re\{(\psi,\phi)\} $. Similarly if $ \chi'=\psi+i\phi $, we obtain $ (\mathbf{U}\chi',\mathbf{U}\chi')=(\chi',\chi' ) $, that implies $ \Im\{(\mathbf{U}\psi,\mathbf{U}\phi)\}=\Im\{(\psi,\phi)\} $, therefore $ (\mathbf{U}\psi, \mathbf{U}\phi)=(\psi,\phi) $.
\subsection{Adjoint operator}
Consider the inner product $ (\psi,\mathbf{A}\phi) $, where $ \mathbf{A} $ is a given linear operator of interest. This scalar quantity is certainly a function of the operator $ \mathbf{A} $ and the pair of vectors $ \psi $ and $ \phi $, namely $ (\psi,\mathbf{A}\phi)=F(\mathbf{A},\psi,\phi) $ is a scalar quantity.
Instead of performing the above inner product straightforwardly, we shall obtain the very same scalar of $ (\psi,\mathbf{A}\phi) $ by forming the following inner product $ (\mathbf{A}^\dag\psi,\phi) $ such that $ (\psi,\mathbf{A}\phi)\equiv (\mathbf{A}^\dag\psi,\phi) $. The operator $ \mathbf{A}^\dag $ is called the \textbf{adjoint operator}\index{operator!adjoint operator} of $ \mathbf{A} $. The following relations can be easily established (proofs left to readers):
\index{adjoint conjugate}\index{conjugate!adjoint conjugate}
\addtocounter{equation}{1}
\begin{align}
&(\mathbf{A}+\mathbf{B})^\dag=\mathbf{A}^\dag+\mathbf{B}^\dag, &\quad\quad\quad\tag{\arabic{chapter}.\arabic{equation}a}\\
&(\alpha\mathbf{A})^\dag=\alpha^*\mathbf{A}^\dag, &\quad\quad\quad\tag{\arabic{chapter}.\arabic{equation}b}\\
&(\mathbf{A}\mathbf{B})^\dag=\mathbf{B}^\dag\mathbf{A}^\dag, &\quad\quad\quad\tag{\arabic{chapter}.\arabic{equation}c}\\
&(\mathbf{A}^\dag)^\dag=\mathbf{A}, &\quad\quad\quad\tag{\arabic{chapter}.\arabic{equation}d}\\
&(\mathbf{A}^\dag)^{-1}=(\mathbf{A}^{-1})^\dag. &\quad\quad\quad\tag{\arabic{chapter}.\arabic{equation}e}
\end{align}
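As a sample of such a proof, the product rule follows from applying the defining relation $ (\psi,\mathbf{A}\phi)=(\mathbf{A}^\dag\psi,\phi) $ twice:
\begin{align*}
(\psi,\mathbf{A}\mathbf{B}\phi)=(\mathbf{A}^\dag\psi,\mathbf{B}\phi)=(\mathbf{B}^\dag\mathbf{A}^\dag\psi,\phi),
\end{align*}
\noindent hence $ (\mathbf{A}\mathbf{B})^\dag=\mathbf{B}^\dag\mathbf{A}^\dag $.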
It can also be shown that $ \mathbf{A}^\dag $ is a bounded operator if $ \mathbf{A} $ is bounded and their norms are equal, i.e. $ \norm{A}=\norm{A^\dag} $.
To prove the above equality, let us consider $ \norm{\mathbf{A}^\dag\psi}^2=(\mathbf{A}^\dag\psi,\mathbf{A}^\dag\psi) $, namely\\[-8pt]
\begin{align*}
\norm{\mathbf{A}^\dag\psi}^2=(\mathbf{A}^\dag\psi,\mathbf{A}^\dag\psi)=(\mathbf{A}\mathbf{A}^\dag\psi,\psi)
\leqslant \norm{\psi}\norm{\mathbf{A}\mathbf{A}^\dag\psi}\leqslant
\norm{\psi}\norm{\mathbf{A}}\norm{\mathbf{A}^\dag\psi},
\end{align*}
\noindent therefore $ \norm{\mathbf{A}^\dag\psi}\leqslant\norm{\mathbf{A}}\norm{\psi} $, and we have $ \norm{\mathbf{A}^\dag}\leqslant\norm{\mathbf{A}} $.
On the other hand, we have $ \norm{\mathbf{A}\psi}^2=(\mathbf{A}\psi,\mathbf{A}\psi)=(\mathbf{A}^\dag\mathbf{A}\psi,\psi)\leqslant\norm{\psi}\norm{\mathbf{A}^\dag}\norm{\mathbf{A}\psi}, $ which implies $ \norm{\mathbf{A}}\leqslant\norm{\mathbf{A}^\dag} $. Therefore
$ \norm{A}=\norm{A^\dag} $ is established.
\subsection{Hermitian operator}
When an operator is self-adjoint, namely when the adjoint operator $ \mathbf{A}^\dag $ equals the operator $ \mathbf{A} $ itself, i.e. $ \mathbf{A}=\mathbf{A}^\dag $, we call $ \mathbf{A} $ a \textbf{Hermitian operator}\index{operator!Hermitian operator}\index{Hermitian!Hermitian operator}.
\subsection{Projection operator}
Let $ \mathcal{H} $ be a Hilbert space in which we consider a \textbf{subspace}\index{subspace}\index{space!subspace} $ \mathcal{M} $ and its orthogonal complement space\index{space!orthogonal complement space} $ \mathcal{M}_\perp $, with $ \mathcal{H}=\mathcal{M}\oplus\mathcal{M}_\perp $. Each vector $ \psi $ in $ \mathcal{H} $ is decomposed into unique vectors $ \psi_{\mathcal{M}} $ in $ \mathcal{M} $ and $ \psi_{\mathcal{M}_\perp} $ in $ \mathcal{M}_\perp $ such that $ \psi=\psi_{\mathcal{M}}+\psi_{\mathcal{M}_\perp} $ and $ (\psi_{\mathcal{M}},\psi_{\mathcal{M}_\perp})=0 $.
The projection operator $ \mathbf{P}_\mathcal{M} $ acts upon a vector $ \psi $ and projects it onto the subspace $ \mathcal{M} $, i.e. $ \mathbf{P}_\mathcal{M}\psi=\psi_{\mathcal{M}} $. It is obvious that $ \mathbf{P}_\mathcal{M}\psi=\psi $ if $ \psi\in\mathcal{M} $ and $ \mathbf{P}_\mathcal{M}\psi=\mathit{0} $ if $ \psi\in\mathcal{M}_\perp $.
One can also be easily convinced that
\begin{align*}
(\psi,\mathbf{P}_\mathcal{M}\phi)&=(\psi,\phi_{\mathcal{M}})=(\psi_{\mathcal{M}}+\psi_{\mathcal{M}\perp},\phi_{\mathcal{M}})=(\psi_\mathcal{M},\phi_{\mathcal{M}})\\
&=(\psi_\mathcal{M},\phi)=(\mathbf{P}_\mathcal{M}\psi_\mathcal{M},\phi)=(\mathbf{P}_\mathcal{M}\psi,\phi).
\end{align*}
\noindent Therefore $ \mathbf{P}_\mathcal{M} $ is also a Hermitian operator, i.e. $ \mathbf{P}_\mathcal{M}^\dag=\mathbf{P}_\mathcal{M} $.
Similarly we define $ \mathbf{P}_{\mathcal{M}_\perp} $ such that $ \mathbf{P}_{\mathcal{M}_\perp}\psi=\psi_{\mathcal{M}_\perp} $ and the sum of $ \mathbf{P}_{\mathcal{M}} $ and $ \mathbf{P}_{\mathcal{M}_\perp} $ becomes an identity operator, i.e.
$$
\mathbf{P}_{\mathcal{M}}+\mathbf{P}_{\mathcal{M}_\perp}=\mathbf{I}. ~~~~~~
$$
\subsection{Idempotent operator}
The \textbf{projection operator}\index{operator!projection operator} is an \textbf{idempotent operator}\index{operator!idempotent operator}\index{idempotent operator }, namely $ \mathbf{P}_{\mathcal{M}}^2=\mathbf{P}_\mathcal{M} $ because
$\mathbf{P}_{\mathcal{M}}^2\psi=\mathbf{P}_\mathcal{M}\psi_\mathcal{M}=\mathbf{P}_\mathcal{M}\psi$.
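As a simple illustration, take $ \mathcal{H}=\mathbb{C}^2 $ and let $ \mathcal{M} $ be the subspace spanned by the first basis vector. In matrix form,
\begin{align*}
\mathbf{P}_\mathcal{M}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\qquad
\psi=\begin{pmatrix}\alpha\\ \beta\end{pmatrix},\qquad
\mathbf{P}_\mathcal{M}\psi=\begin{pmatrix}\alpha\\ 0\end{pmatrix}=\psi_\mathcal{M},
\end{align*}
and one checks directly that $ \mathbf{P}_\mathcal{M}^2=\mathbf{P}_\mathcal{M} $ and $ \mathbf{P}_\mathcal{M}^\dag=\mathbf{P}_\mathcal{M} $.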
\section{The postulates of quantum mechanics}\index{dynamical observable}\index{postulates of quantum mechanics}\index{Hilbert space}\index{space!Hilbert space}
We start to formulate the postulates of quantum mechanics. We shall treat the first three postulates in this chapter, and leave the 4th postulate for the next chapter when we investigate the time evolution of a quantum system.\index{time evolution}
\proposition{
\subsection*{\bfseries\itshape 1{\small st} postulate of quantum mechanics:}\index{postulates of quantum mechanics!1st postulate of}
{\it For every physical system, there exists an abstract entity, called the state\index{state}\index{state!physical state} (or the state function\index{function!state function} or wave function\index{function!wave function}\index{wave!wave function}\index{vector!state vector} that shall be discussed later), which provides information about the dynamical quantities of the system, such as coordinates, momenta, energy, angular momentum, charge or isospin, etc. All the states of a given physical system are elements of a Hilbert space, i.e.}
~\\
\begin{center}
\begin{tabular}{ccc}
\bottomrule[1.5pt]
{\it physical system}&$\longleftrightarrow$& \it Hilbert space $ \mathcal{H} $\\
\it physical state&$\longleftrightarrow$& \it state vector\index{vector!state vector} $ \psi $ in $ \mathcal{H} $\\[-15pt]
&&\\
\toprule[1.5pt]
\end{tabular}
\end{center}
{\it Furthermore, for each physical observable\index{physical observable}, such as the 3{\footnotesize rd} component of the angular momentum or the total energy of the system and so forth, there is associated a unique Hermitian operator\index{operator!Hermitian operator}\index{Hermitian!Hermitian operator} in the Hilbert space, i.e.}
\begin{center}
\begin{tabular}{ccc}
\bottomrule[1.5pt]
\it physical (dynamical) &&\it corresponding \\
\it observable &&\it Hermitian operator\\
\hline
\it total energy $ E $&$\longleftrightarrow$& $ \mathbf{H}=\mathbf{H}^\dag $\\
\it coordinate $ \vec{x} $&$\longleftrightarrow$& $ \mathbf{X}=\mathbf{X}^\dag $\\
\it angular momentum $ \vec{l} $&$\longleftrightarrow$&$ \mathbf{L}=\mathbf{L}^\dag $\\
\toprule[1.5pt]
\end{tabular}
\end{center}
}
The physical quantity measured in the system for the observable corresponding to a Hermitian operator $ \mathbf{A} $ is obtained by taking the inner product of the pair $ \psi $ and $ \mathbf{A}\psi $, i.e.
\begin{align}
\langle\mathbf{A}\rangle =(\psi,\mathbf{A}\psi),~~~~~~~~~~~
\end{align}
\noindent which is called the \textbf{expectation value}\index{expectation value} of the dynamical quantity $ \mathbf{A} $ for the system in the state $ \psi $, where $ \psi $ is normalized, i.e. $ \Vert\psi\Vert=1 $.
The action of the operator $ \mathbf{A} $ upon the vector $ \psi $ in general changes it into another vector $ \phi $. This implies that the measurement of a dynamical quantity in a certain state usually disturbs the physical system: the original state is changed into another state by the external disturbance accompanying the measurement.
In particular, if an operator $ \mathbf{A} $ is such that $ \mathbf{A}\psi_a=a\psi_a $, i.e. when $ \mathbf{A} $ acts upon a particular physical state $ \psi_a $ the resultant state is the same as the one before, then it is said that the physical state is prepared for the measurement of the \textbf{dynamical observable}\index{dynamical observable} associated with the operator $ \mathbf{A} $. We shall name:
\begin{itemize}
\item $ \psi_a $ : the state particularly prepared in the system for the measurement of the dynamic quantity, called the \textbf{eigenstate}\index{eigenstate} of the operator $ \mathbf{A} $.
\item $ a $ : the value of the measurement of the dynamical quantity in the particular prepared state, called the \textbf{eigenvalue}\index{eigenvalue} of the operator $ \mathbf{A} $.
\end{itemize}
We shall now explore some properties concerning the eigenvectors\index{eigenvector} and the eigenvalues through a few propositions.
\proposition{
\subsubsection*{ \textit{Proposition 1.}}
\textit{The eigenvalues for a Hermitian operator\index{operator!Hermitian operator}\index{Hermitian!Hermitian operator} are all real.}}
Let $ \mathbf{A}\psi_a=a\psi_a $ with $ \mathbf{A}^\dag=\mathbf{A} $ and $ \psi_a $ normalized, and consider the inner product $ \langle\mathbf{A}\rangle_{\psi_a}=(\psi_a, \mathbf{A}\psi_a)=(\psi_a,a\psi_a)=a(\psi_a,\psi_a)=a $.
On the other hand, we have $ \langle\mathbf{A}\rangle_{\psi_a}=(\mathbf{A}^\dag\psi_a,\psi_a)=(\mathbf{A}\psi_a,\psi_a)=a^*(\psi_a,\psi_a)=a^* $, which implies $ a=a^* $ since $ \psi_a $ is not a null vector.
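As a simple check, consider the Hermitian matrix
\begin{align*}
\mathbf{A}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\qquad \mathbf{A}^\dag=\mathbf{A};
\end{align*}
although its entries are complex, its eigenvalues, determined by $ \det(\mathbf{A}-a\mathbf{I})=a^2-1=0 $, are $ a=\pm 1 $, which are indeed real.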
\proposition{
\subsubsection*{\textit{Proposition 2.}}
\textit{Two eigenvectors of a Hermitian operator are orthogonal to each other if the corresponding eigenvalues are unequal.}}
Let $ \mathbf{A}\psi_a=a\psi_a $ and $ \mathbf{A}\psi_b=b\psi_b $ where $ a\neq b $, and since
\begin{align*}
(\psi_a,\mathbf{A}\psi_b)=b(\psi_a,\psi_b)
=(\mathbf{A}^\dag\psi_a,\psi_b)=a^*(\psi_a,\psi_b)=a(\psi_a,\psi_b),
\end{align*}
\noindent therefore $ (a-b)(\psi_a,\psi_b)=0 $. That implies $ (\psi_a, \psi_b)=0 $ if $ a\neq b $.
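For instance, the Hermitian matrix
\begin{align*}
\mathbf{A}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}
\end{align*}
has eigenvalues $ a=\pm 1 $ with eigenvectors $ \psi_{\pm}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \pm 1\end{pmatrix} $, and indeed $ (\psi_+,\psi_-)=\frac{1}{2}(1-1)=0 $.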
It often occurs that there exists more than one eigenvector of an operator with the same eigenvalue. Consider the Hermitian operator $ \mathbf{C} $, such that
\begin{align}
\mathbf{C}\psi_{c_i}=c_i\psi_{c_i},~~~~~~~~~~~~~~~~~~~~
\end{align}
\noindent where $ c_1=c_2=\ldots =c_m=c $, and $ (\psi_{c_1},\psi_{c_2},\ldots ,\psi_{c_m}) $ are linearly independent.
The eigenvalue $ c $ is called $ \bm{m} $-\textbf{fold degenerate}\index{m-folddegeneracy@$ m $-fold degenerate}\index{degeneracy!m-folddegeneracy@$ m $-fold degenerate} if there are $ m $ linearly independent eigenvectors corresponding to the same eigenvalue $ c $ of the operator $ \mathbf{C} $.
\proposition{
\subsubsection*{\textit{Proposition 3.}}
\textit{If the eigenvalue} $ c $ \textit{of the operator} $ \mathbf{C} $ \textit{is degenerate\index{degeneracy}, any linear combination of the linearly independent eigenvectors is also an eigenvector.}}
Due to the linearity of the operator $ \mathbf{C} $, the linear combination $ \sum\alpha_i\psi_{c_i} $ is also an eigenvector of $ \mathbf{C} $ with the eigenvalue $ c $.
By means of the Gram-Schmidt orthogonalization process, one is easily able to construct a new set of orthonormal vectors $ \{\tilde{\psi}_{c_1},\tilde{\psi}_{c_2},\ldots,\tilde{\psi}_{c_m}\} $ out of the previous set of $ m $ linearly independent vectors $ \{\psi_{c_1},\psi_{c_2},\ldots,\psi_{c_m}\} $, such that
\begin{align}
(\tilde{\psi}_{c_i},\tilde{\psi}_{c_j})=\delta_{ij}.~~~~~~~~~~
\end{align}
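For the first two vectors, for instance, the Gram-Schmidt construction reads explicitly
\begin{align*}
\tilde{\psi}_{c_1}=\frac{\psi_{c_1}}{\Vert\psi_{c_1}\Vert},\qquad
\tilde{\psi}_{c_2}=\frac{\psi_{c_2}-(\tilde{\psi}_{c_1},\psi_{c_2})\tilde{\psi}_{c_1}}{\Vert\psi_{c_2}-(\tilde{\psi}_{c_1},\psi_{c_2})\tilde{\psi}_{c_1}\Vert},
\end{align*}
and each subsequent $ \tilde{\psi}_{c_k} $ is obtained by subtracting from $ \psi_{c_k} $ its components along $ \tilde{\psi}_{c_1},\ldots,\tilde{\psi}_{c_{k-1}} $ and normalizing; by Proposition 3 the resulting vectors are still eigenvectors of $ \mathbf{C} $ with the eigenvalue $ c $.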
Following the results of Propositions 2 and 3, we conclude that
\begin{align}
(\tilde{\psi}_{a_i},\tilde{\psi}_{b_j})=\delta_{ab}\delta_{ij},~~~~~~~
\end{align}
\noindent where $ a,b $ refer to the eigenvalues and $ i,j $ refer to the indices of degeneracy. We shall henceforth drop the tilde on top of the orthonormal basis vectors without mentioning it further.
\proposition{
\subsection*{\bfseries\itshape 2{\small nd} postulate of quantum mechanics:}\index{postulates of quantum mechanics!2nd postulate of}
\it The set of eigenvectors $ \psi_{a_i} $ of a given Hermitian operator corresponding to a physical observable form the basis of a Hilbert space.
Any state in the physical system can be denoted by a vector in the Hilbert space as a linear combination of $ \psi_{a_i} $, i.e. $ \psi=\sum \alpha_{a_i}\psi_{a_i} $, where $ (\psi_{a_i},\psi_{a'_j})=\delta_{aa'}\delta_{ij} $.}
The coefficient $ \alpha_{a_i} $ is obtained by taking the inner product
\begin{align}
(\psi_{a_i},\psi)=\alpha_{a_i}.~~~~~~~~
\end{align}
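This follows directly from the orthonormality of the basis, i.e.
\begin{align*}
(\psi_{a_i},\psi)=\Big(\psi_{a_i},\sum_{a'_j}\alpha_{a'_j}\psi_{a'_j}\Big)=\sum_{a'_j}\alpha_{a'_j}\delta_{aa'}\delta_{ij}=\alpha_{a_i}.
\end{align*}
Note also that the normalization $ \Vert\psi\Vert=1 $ translates into $ \sum_{a_i}|\alpha_{a_i}|^2=1 $.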
The formulation of the 2nd postulate of quantum mechanics is purely artificial. In fact, it is well studied and has been proved that a function space can be spanned by the eigenvectors of a Hermitian Sturm-Liouville operator. Since the dynamical operators in a quantum system are not confined to those of the Sturm-Liouville form, we formulate the second postulate of quantum mechanics deliberately, in order to build and integrate the whole mathematical structure and the logical development of the quantum theory on a solid and self-consistent ground.
\section[Commutability and compatibility of dynamical observables]{Commutability and compatibility of dynamical\\ observables}
\index{commutability}\index{compatibility}
We shall introduce in the following subsections some terminology which will help to differentiate various types of compatible observables.
\subsection{Compatible observables}
If there exists a complete set of linearly independent vectors $ \psi_{a_i} $ which are eigenstates of both operators $ \mathbf{R} $ and $ \mathbf{S} $, then the two physical observables corresponding respectively to the Hermitian operators $ \mathbf{R} $ and $ \mathbf{S} $ are said to be \textbf{compatible}\index{compatible observables}.
\proposition{
\subsubsection*{\textit{Proposition 4.}}
\textit{If two observables are compatible, their corresponding operators $ \mathbf{R} $ and $ \mathbf{S} $ commute, i.e. $ [\mathbf{R},\mathbf{S}]=0 $.}}
It is obvious because $ \mathbf{R} $ and $ \mathbf{S} $ have the following properties:
$$
\mathbf{R}\psi_a=r_a\psi_a,~~\mbox{and}~~\mathbf{S}\psi_a=s_a\psi_a, $$
\noindent which lead to
$$
(\mathbf{RS-SR})\psi_a=[\mathbf{R},\mathbf{S}]\psi_a=\mathit{0},
$$
\noindent if we define the \textbf{commutator}\index{commutator} of $ \mathbf{R} $ and $ \mathbf{S} $ as $ [\mathbf{R},\mathbf{S}]=\mathbf{RS-SR} $.
\noindent Since the $ \psi_a $ form a complete set, any vector $ \psi $ can be expanded in them; therefore $ [\mathbf{R},\mathbf{S}]\psi=\mathit{0} $ for any $ \psi $, and Proposition 4 is established.
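As a simple illustration, the diagonal Hermitian matrices
\begin{align*}
\mathbf{R}=\begin{pmatrix}r_1&0\\ 0&r_2\end{pmatrix},\qquad
\mathbf{S}=\begin{pmatrix}s_1&0\\ 0&s_2\end{pmatrix}
\end{align*}
share the complete set of eigenvectors $ (1,0)^{T} $ and $ (0,1)^{T} $, and one verifies at once that $ \mathbf{RS}=\mathbf{SR} $.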
\proposition{
\subsubsection*{\textit{Proposition 5.}}
\textit{If} $ \mathbf{R} $ \textit{and} $ \mathbf{S} $ \textit{are operators corresponding to two compatible observables, and if} $ \psi_r $ \textit{are eigenvectors of} $ \mathbf{R} $, \textit{then}
\begin{align}
(\psi_r,\mathbf{S}\psi_{r'})=0, ~~\mbox{for}~~~r\neq r'. ~~~~~~~~
\end{align}
}
The proof of Proposition 5 is straightforward, i.e.
\begin{align*}
r'(\psi_r,\mathbf{S}\psi_{r'})=(\psi_r,\mathbf{SR}\psi_{r'})=(\psi_r,\mathbf{RS}\psi_{r'})=(\mathbf{R}\psi_r,\mathbf{S}\psi_{r'})
=r(\psi_r,\mathbf{S}\psi_{r'}).
\end{align*}
We have $(r'-r)(\psi_r,\mathbf{S}\psi_{r'})=0$. Hence Proposition 5 is proved.
\proposition{
\subsubsection*{\textit{Proposition 6.}}
\textit{If} $ \mathbf{P}_r $ \textit{is the projection operator onto subspace\index{subspace}\index{space!subspace} with vectors} $ \psi_r $, \textit{then}
$$
[\mathbf{P}_r,\mathbf{S}]=0.~~~~~~~~
$$
}
The proof is again straightforward. For
~\\[-24pt]
\begin{align}
\mathbf{P}_r\psi_{r'}=\delta_{rr'}\psi_{r'},~~~~
\end{align}
\noindent we have
\begin{align*}
\notag (\psi_{r'},[\mathbf{P}_r,\mathbf{S}]\psi_{r''})&=(\psi_{r'},(\mathbf{P}_r\mathbf{S}-\mathbf{S}\mathbf{P}_r)\psi_{r''})\\
&=(\mathbf{P}_r\psi_{r'} ,\mathbf{S}\psi_{r''})-(\psi_{r'},\mathbf{SP}_r\psi_{r''})\\
&=(\delta_{rr'}-\delta_{rr''})(\psi_{r'},\mathbf{S}\psi_{r''})\equiv 0,
\end{align*}
\noindent where the last expression vanishes because $ (\psi_{r'},\mathbf{S}\psi_{r''})=0 $ for $ r'\neq r'' $ by Proposition 5, while the factor $ \delta_{rr'}-\delta_{rr''} $ vanishes for $ r'=r'' $. That leads to $ (\psi,[\mathbf{P}_r,\mathbf{S}]\psi)=0 $ for all $ \psi $, hence
\begin{align}
[\mathbf{P}_r,\mathbf{S}]=0.~~~~~~~~~
\end{align}
\proposition{
\subsubsection*{\textit{Proposition 7.}}
\textit{If} $ \mathbf{R} $ \textit{and} $ \mathbf{S} $ \textit{are two commuting Hermitian operators, there exists a complete set of states which are simultaneously eigenvectors of} $ \mathbf{R} $ \textit{and} $ \mathbf{S} $.}
Let us construct a vector $ \phi_r^{(s)} $ by applying the projection operator $ \mathbf{P}_s $, which projects onto the eigenspace of $ \mathbf{S} $ with eigenvalue $ s $, to the vector $ \psi_r $, the eigenvector of the Hermitian operator $ \mathbf{R} $, i.e.
\begin{align}
\phi_r^{(s)}=\mathbf{P}_s\psi_r.~~~~~~~~~
\end{align}
It is obvious that $ \phi_r^{(s)} $, if it does not vanish, is automatically an eigenvector of the Hermitian operator $ \mathbf{S} $ with the eigenvalue $ s $, namely
\begin{align}
\mathbf{S}\phi_r^{(s)}=s\phi_r^{(s)}.~~~~~~~~~~~
\end{align}
On the other hand, Proposition 6 ensures that $ \phi_r^{(s)} $ is also an eigenvector of the operator $ \mathbf{R} $ , because
\begin{align}
\mathbf{R}\phi_r^{(s)}=\mathbf{R}\mathbf{P}_s\psi_r = \mathbf{P}_s\mathbf{R}\psi_r = r\mathbf{P}_s\psi_r = r \phi_r^{(s)}.
\end{align}
\noindent Hence Proposition 7 is proved.
\subsection{Intrinsic compatibility of the dynamical observables and the direct product space}\label{intrinsic-compatibility-section}\index{space!direct product space}\index{direct product!of spaces}
\index{dynamical observable}\index{intrinsic!intrinsic compatibility}
We have seen in the previous subsection that if two operators corresponding respectively to two dynamical observables commute with each other, there always exists a complete set of states which are simultaneously eigenvectors of these two operators. The construction of the simultaneous eigenvectors can be simplified even further in some particular cases, in which the compatibility of the two operators is based solely upon the first and the second fundamental \textbf{commutation relations}\index{fundamental commutation relation}, i.e. $ [q_i,q_j]=[p_i,p_j]=0 $, without making use of the third fundamental commutation relation. The dynamical observables corresponding to the commuting operators in this particular category are said to be intrinsic compatible. The construction of the simultaneous eigenvectors of the intrinsic compatible observables\index{intrinsic!intrinsic compatible observables} is formulated in the following proposition.
\proposition{
\subsubsection*{\textit{Proposition 8.}}
\it Let $ \mathbf{A} $ and $ \mathbf{B} $ be two operators corresponding respectively to two intrinsic compatible observables, and let $ \psi_{a_i} $ and $ \varphi_{b_j} $ be the eigenvectors of $ \mathbf{A} $ and $ \mathbf{B} $ with the eigenvalues $ a_i $ and $ b_j$ respectively, then the direct product\index{direct product!of states} of $ \psi_{a_i} $ and $ \varphi_{b_j} $, denoted by $ \psi_{a_i}\otimes\varphi_{b_j} $ is the eigenvector of the operator $ \mathbf{F}(\mathbf{A},\mathbf{B}) $ with the eigenvalue $ F(a_i,b_j) $.}
It can be easily shown that $ \psi_{a_i}\otimes\varphi_{b_j} $ is the simultaneous eigenvector of $ \mathbf{A} $ and $ \mathbf{B} $ with eigenvalues $ a_i $ and $ b_j $ respectively if we make the following identifications:
$$
\mathbf{A}=\mathbf{F}(\mathbf{A},\mathbf{I}),~~~\mbox{and }~~~\mathbf{B}=\mathbf{F}(\mathbf{I},\mathbf{B}).
$$
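For instance, with $ \mathbf{F}(\mathbf{A},\mathbf{B})=\mathbf{A}\otimes\mathbf{I}+\mathbf{I}\otimes\mathbf{B} $ one has
\begin{align*}
\mathbf{F}(\mathbf{A},\mathbf{B})\,(\psi_{a_i}\otimes\varphi_{b_j})
=(\mathbf{A}\psi_{a_i})\otimes\varphi_{b_j}+\psi_{a_i}\otimes(\mathbf{B}\varphi_{b_j})
=(a_i+b_j)\,\psi_{a_i}\otimes\varphi_{b_j},
\end{align*}
so that the direct product vector is an eigenvector of $ \mathbf{F} $ with the eigenvalue $ F(a_i,b_j)=a_i+b_j $.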
Two operators acting in different Hilbert spaces are always intrinsic compatible. Consider the system of a particle with spin: the physical observables associated with the configuration space are compatible with the physical observables in the spin space. Therefore one is able to express the quantum state as the direct product of a vector in the configuration space and another vector in the spin space.
We shall leave it for the reader to verify that Proposition 5 through Proposition 7 are consistent with the above formulation. We will also elaborate more on the algebra of the direct product space in Section \ref{direct-product-section} in order to provide a rigorous proof for Proposition 8.
\subsection[3rd postulate of quantum mechanics and commutator algebra]{3rd postulate of quantum mechanics and\\ commutator algebra}\index{commutator}
\proposition{
\subsection*{\bfseries\itshape 3{\small rd} postulate of quantum mechanics:}\index{postulates of quantum mechanics!3rd postulate of}
\it
Every Poisson bracket in classical mechanics for canonical variables\index{canonical commutation relations} $ (p_i,q_j) $ is replaced by the commutator of the corresponding operators with the following relations:
\begin{center}
\begin{tabular}{ccc}
\bottomrule[1.5pt]
Classical mechanics&&Quantum mechanics\\
\hline
$ [q_i,q_j]=0 $&$ \rightarrow $&$ [Q_i,Q_j]=0 $\\
$ [p_i,p_j]=0 $&$ \rightarrow $&$ [P_i,P_j]=0 $\\
$ [p_i,q_j]=\delta_{ij} $&$ \rightarrow $&$ [P_i,Q_j]=\displaystyle\frac{\hbar}{i}\delta_{ij} $\\[-10pt]
&&\\
\toprule[1.5pt]
\end{tabular}
\end{center}
\noindent where $ h=2\pi\hbar $ is Planck's constant.}
We discuss the commutator algebra for further applications in later chapters.
The commutator of the operators $ \mathbf{A} $ and $ \mathbf{B} $ was defined previously as
\begin{align}
[\mathbf{A},\mathbf{B}]=\mathbf{AB}-\mathbf{BA},&~~~~\quad\quad\quad\quad
\end{align}
\noindent from which the following identities follow:
\addtocounter{equation}{1}
\begin{align}
[\mathbf{A},\mathbf{B}]&= -[\mathbf{B},\mathbf{A}],\tag{\arabic{chapter}.\arabic{equation}a}\\
\protect[\mathbf{A},\mathbf{A}]&=\mathbf{O},\tag{\arabic{chapter}.\arabic{equation}b}\\
\protect[\mathbf{A},\mathbf{B}+\mathbf{C}]&=[\mathbf{A},\mathbf{B}]+[\mathbf{A},\mathbf{C}],\tag{\arabic{chapter}.\arabic{equation}c}\\
\protect[\mathbf{A},[\mathbf{B},\mathbf{C}]]&+[\mathbf{B},[\mathbf{C},\mathbf{A}]]+[\mathbf{C},[\mathbf{A},\mathbf{B}]]=\mathbf{O}.\tag{\arabic{chapter}.\arabic{equation}d}
\end{align}
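A further identity that will be used frequently is the product rule
\begin{align*}
[\mathbf{A},\mathbf{BC}]=[\mathbf{A},\mathbf{B}]\mathbf{C}+\mathbf{B}[\mathbf{A},\mathbf{C}],
\end{align*}
which is verified by expanding both sides: $ \mathbf{ABC}-\mathbf{BCA}=(\mathbf{AB}-\mathbf{BA})\mathbf{C}+\mathbf{B}(\mathbf{AC}-\mathbf{CA}) $.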
An operator $ \mathbf{C} $ is called a \textbf{constant operator}\index{operator!constant operator} if it commutes with every operator corresponding to a dynamical observable. Obviously any real number times the unit operator $ \mathbf{I} $ is a Hermitian constant operator.
The exponential of an operator, namely $ e^{\mathbf{A}} $, is defined in the usual sense of the exponential function, i.e.
\begin{align}
e^{\mathbf{A}}=\mathbf{I}+\mathbf{A}+\frac{1}{2!}\mathbf{A}^2+\frac{1}{3!}\mathbf{A}^3+\ldots .
\end{align}
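For an operator satisfying $ \mathbf{A}^2=\mathbf{I} $, for instance, the series splits into even and odd powers and can be summed in closed form,
\begin{align*}
e^{\theta\mathbf{A}}=\mathbf{I}\Big(1+\frac{\theta^2}{2!}+\ldots\Big)+\mathbf{A}\Big(\theta+\frac{\theta^3}{3!}+\ldots\Big)=\mathbf{I}\cosh\theta+\mathbf{A}\sinh\theta .
\end{align*}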
We shall now show a useful identity
\begin{align}
e^{\mathbf{A}}\mathbf{B}e^{-\mathbf{A}}\equiv\mathbf{B}+\frac{1}{1!}[\mathbf{A},\mathbf{B}]+ \frac{1}{2!}[\mathbf{A},[\mathbf{A},\mathbf{B}]]+\ldots .
\end{align}
\subsubsection*{Proof}
Let $ f(\lambda)=e^{\lambda\mathbf{A}}\mathbf{B}e^{-\lambda\mathbf{A}} $ and expand the function $ f(\lambda) $ in terms of power series of $ \lambda $ at $ \lambda=0 $, i.e.
$$
f(\lambda)=f(0)+\frac{\lambda}{1!}f'(0)+\frac{\lambda^2}{2!}f''(0)+\ldots .
$$
Since,
$$
f'(\lambda)=\mathbf{A}f(\lambda )-f(\lambda )\mathbf{A}=[\mathbf{A},f(\lambda )],
$$
\noindent we can evaluate each order of the derivatives of $ f(\lambda) $ at $ \lambda=0 $, i.e.
\begin{align*}
f(0)&=\mathbf{B},\\
f'(0)&=[\mathbf{A},\mathbf{B}],\\
f''(0)&=[\mathbf{A},[\mathbf{A},\mathbf{B}]],
\end{align*}
\noindent and thus
$$
f(\lambda)=f(0)+\frac{\lambda}{1!}f'(0)+\frac{\lambda^2}{2!}f''(0)+\ldots
$$
\noindent becomes
$$
f(\lambda)= e^{\lambda\mathbf{A}}\mathbf{B}e^{-\lambda\mathbf{A}} = \mathbf{B}+\lambda [\mathbf{A},\mathbf{B}]+\frac{\lambda^2}{2!}[\mathbf{A},[\mathbf{A},\mathbf{B}]]+\ldots .
$$
Therefore we reach the identity by setting $ \lambda =1 $ in the function $ f(\lambda ) $, i.e.
\begin{align}\label{bukef1}
f(1)= e^{\mathbf{A}}\mathbf{B}e^{-\mathbf{A}} \equiv \mathbf{B}+ [\mathbf{A},\mathbf{B}]+\frac{1}{2!}[\mathbf{A},[\mathbf{A},\mathbf{B}]]+\ldots .
\end{align}
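As an application of this identity, take $ \mathbf{A}=\frac{ia}{\hbar}\mathbf{P} $ and $ \mathbf{B}=\mathbf{Q} $ for a single degree of freedom. Since $ [\mathbf{P},\mathbf{Q}]=\frac{\hbar}{i}\mathbf{I} $ by the 3rd postulate, the commutator $ [\mathbf{A},\mathbf{B}]=\frac{ia}{\hbar}\cdot\frac{\hbar}{i}\mathbf{I}=a\mathbf{I} $ is a constant operator, so the series in Eq.~(\ref{bukef1}) terminates after the first commutator, and
\begin{align*}
e^{ia\mathbf{P}/\hbar}\,\mathbf{Q}\,e^{-ia\mathbf{P}/\hbar}=\mathbf{Q}+a\mathbf{I},
\end{align*}
which exhibits the exponential of the momentum operator as a translation of the coordinate.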