Blog
Welcome to my blog. Click here to see a chronological index of all the posts I've written.
John Nash's PhD Thesis
[link—standalone]
Recently I've been working on transcribing the PhD thesis of the legendary John Nash, titled "Non-Cooperative Games". As you may know, Nash was a famous mathematician who did work in differential geometry and PDEs, but most famously pioneered game theory. His best-known result is a concept called "Nash equilibrium", used widely in economics: it was so famous they even made a movie about him! (A Beautiful Mind, 2001).
His thesis was written in 1950, back when LaTeX wasn't a thing and people used typewriters and wrote out math by hand. I tried to find a LaTeX version of his thesis online to no avail, so I decided to transcribe it myself. It's helped me learn some game theory, while figuring out the details of formatting longer LaTeX documents.
However, my workarounds to certain typesetting dilemmas are kind of janky, and sometimes I'm not sure if something is correct or not. Pull requests are welcome, and you can view my progress here.
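For the curious, the overall shape of the LaTeX setup is something like this. This is an illustrative skeleton only, not the actual preamble from the repo:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}

% Numbered environments for the thesis's statements
\newtheorem{theorem}{Theorem}
\newtheorem{definition}[theorem]{Definition}

\title{Non-Cooperative Games}
\author{John F. Nash, Jr.}
\date{May 1950}

\begin{document}
\maketitle
% ...transcribed text and displayed equations go here...
\end{document}
```

The real challenge is less the preamble and more deciding how faithfully to reproduce typewriter-era layout quirks in modern LaTeX.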
Thu, 31 Dec 2020 22:48
Running Log: Swimming
[link—standalone]
Date   Location         Time     Distance
12/22  Oak Point Pool   45 min   2 km
Running log posts are not actually about running, and today's running log is about swimming. I've got one leg of the triathlon down (running), and have started working on another. However, I rarely swim, and even when I do, I do not enjoy it. I want to change that, so I decided to (voluntarily) hop in the pool for the first time in forever.
Usually while swimming I'm too busy focusing on how much I'm suffering to reflect on my thoughts, but since today was a voluntary expedition, I was able to gather my observations and focus on my surroundings more, as I do when I run.
Swimming is inherently a brutal sport. Man isn't built for the water, and being in a pool is like being surrounded by a foreign substance on all sides. Looking around underwater in the deep end, you see a huge expanse of emptiness, with mannequin-like figures comically floating at the top. You are one of them, of course. When I run, sometimes I feel insignificant before a long stretch of nature, earth lining the path all the way to the sun. I can imagine cyclists having the same feeling, but I never thought I would feel that way in the middle of a tiny commercial pool.
There's a book I quite like called "What I Talk About When I Talk About Running", which I've somewhat written about before. Murakami often talks about a deep, dark place, one he retreats to for his most serious writing. In a sense, swimming through a pool feels like going through that tunnel: you're a tiny, insignificant figure surrounded by a foreign substance on all sides, trudging your way through to reach a goal. In the pool is silence, besides the ambient aquatic noises and the rhythmic one-two-three breathing pattern. The walls offer you an escape back to reality, with awful Christmas songs and small chatter, like rungs on a ladder. But I didn't rest during my swim, so it was an immersive experience.
On a side note, I've been trying to do more physical activities without breaks, like Murakami. I think his reasoning is pretty solid: if you paid to get into a race, you paid to run, so why walk? In the same vein, I showed up to the pool to swim, so why should I keep hanging on the ledge?
Fri, 25 Dec 2020 22:47
Nooo don't delete your account, how else are we supposed to invade your privacy and steal your data?
[link—standalone]
Back in May, I went through the process of deleting my presence from the internet, in the form of 150+ accounts I had signed up for over my time on this Earth. Culling the accounts let me sort corporations into good, bad, or horrible, depending on how long it took me to delete each account. The philosophy behind this is that the more a company actually cares about your privacy, the easier it is for you to find your rights and opt out if desired. The more evil a company is, the longer it wants to harvest your data, and thus the harder it makes the process of deleting your account.
Good
Thank you Hulu for having a big easy delete button and transparent privacy policy! And every other company that doesn't fall into the next couple of categories. Not that I support you guys, but thanks anyway.
Bad
Every company that required me to send an email to privacy@blah.com. The average person isn't going through that hassle, so these companies get to keep harvesting data for cash. Most of them also display banners on their websites that say "We care about your privacy!", but if you read the fine print, it says "But only in California". Surprisingly, most of the obscure small companies (typing sites, cloud coding websites, etc.) went here.
Horrible
The worst offenders by far are exactly who you think they are: Facebook, Amazon, Google, etc. Amazon in particular required me to call a service representative and sit through a painful conversation where they try to dissuade you (Jeff, don't you have enough money already?). Facebook never actually deletes your account; it just sits there on their servers, and they make frequent backups. The way to get around this is manually removing all your friends and scrambling your information to mess with the backups; of course, Facebook actively makes automating this process difficult. Then do this again, and again, until you're confident the backups are fried. Absolutely disgusting.
Google is surprisingly cooperative, but that's to keep up a pretense of being the "good guy" of the internet (more on this and "don't be evil" later).
In a similar vein, browsing the web without accounts is atrocious on sites like Reddit, Instagram, etc. So many features are locked away from those who don't get the app or make an account, in other words, those who don't agree to being tracked all the time. Every website nowadays pushes you to make a *free* account so it can invade your privacy without displaying an "accept our invasive cookies" page.
The web is falling into bloat and despair, and nobody sane really has an alternative. Those who resist are seen as privacy freaks who care way too much, and we really only do the bare minimum. If the only alternative to evil is extremism, what kind of dystopia do we live in?
Fri, 18 Dec 2020 19:20
Running Log: Why I Don't Use Strava
[link—standalone]
Date   Location          Time     Distance
12/17  Bluebonnet Trail  ?? min   2-ish km
The semester is wrapping up, and after spending far too much time thinking about torus knots and loopspaces, I decided to go for a run (with my dog).
Usually when I run not much goes through my head, and today was no exception, aside from the occasional "I'm out of shape". I haven't been running much this semester, which is a natural consequence of being enrolled in too many math courses. As you can see, the stats are vague: this is because I ran without looking at the time or how far I went. As a former Strava user, I adopted the habit of running without my phone after becoming a digital hermit. Going out to run without Strava recording was kind of strange at first. I mean, did the run even happen if Strava didn't record it? I got used to it over time, though.
I guess the reason I stopped logging my runs is a mix of minimalism and privacy. For privacy it's, well, duh: a corporation (one that has made questionable decisions) makes a closed-source "app" for your monitoring device that carefully tracks your location for hours at a time. Yippee, sign me up. But running without logging may be beneficial even for the non-privacy-oriented. Before, when I wanted to go for a run, I would have to put on my flipbelt to hold my phone, fumble for a bit at the door to start recording, and then finally get to running. Now I just lace up my shoes and head out: no unnecessary gear to bring, no splits to worry about. In a sense, running is the ultimate minimalist sport, because all you need are shoes and the great outdoors. Why ruin that with fancy equipment or tracking devices?
Another thing that bothered me about Strava was the social aspect, namely, how it hijacks our need for attention to maintain a userbase. I mean yeah, social media is a much worse offender here, and while the like (in this case, kudos) button feeds our need for dopamine and attention to draw us into the service, I would much rather have companies use this to get people to exercise than for internet domination (cough Google). The sneaky thing about Strava, though, is that you don't really think of it as traditional social media, even though it has the same downsides. Thinking of witty captions is harmless and all, but at one point I realized I was only running so I could see my runs logged on Strava. At that point, there's a problem. Of course, I don't have anything against Strava users; I'm just saying why I personally don't use it. I do end up losing access to running data, which is somewhat annoying at times. Maybe I should just measure how long I run with measuring tape.
Thu, 17 Dec 2020 18:22
What I've Been Studying: Attack of the Categories
[link—standalone]
Now that the institution of learning has temporarily ceased operation, I can finally get to studying academic material, which is what I wanted to do all along. Sadly school got in the way. I've seen people post what they've been learning online, not so much as something that others would be interested in, but rather as a time capsule and reflection of sorts. Which seemed like a mighty fine idea, hence this post. Here's what I've been studying over Thanksgiving break (more loosely, somewhat recently), now that I have no tedious homework or essays to write. The main topic was algebraic topology, as always.
Category Theory
As the title suggests, I've been brushing up on some category theory.
 Specifically, I added some constructions to my list of examples, like the homotopy category $\mathsf{hTop}$ and the chain complex and homology functors, which, when composed, assign a sequence of abelian homology groups to a space.
 The biggest development was wrapping my head around products and coproducts in categories. These are characterized by universal properties; in particular, when they exist, they're unique up to isomorphism. Coproducts are the categorical dual of products, obtained by reversing the arrows: instead of projections out of the object, they come with inclusion maps into it, and maps out of a coproduct factor through it uniquely; the set-theoretic example is the disjoint union. We can also generalize these to more than two objects by means of cones and cocones, and later on limits and colimits.
 I reviewed what monomorphisms and epimorphisms were, and realized the boundary maps $H_n(X,A)\overset{\partial}{\longrightarrow}H_{n-1}(A)$ and the change-of-coefficient homomorphisms $H_n(X;G_1)\to H_n(X;G_2)$ can be viewed as natural transformations.
 If I have time, I plan on learning about the Yoneda lemma, limits and colimits, pullbacks and pushouts, and abelian categories.
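For reference, here is the universal property of the product written out in symbols (standard material, my own phrasing):

```latex
% Universal property of the product: any pair of maps into A and B
% factors uniquely through A x B via the projections \pi_A, \pi_B.
\[
\forall\, f \colon Z \to A,\ g \colon Z \to B,\quad
\exists!\, h \colon Z \to A \times B
\quad\text{such that}\quad
\pi_A \circ h = f, \qquad \pi_B \circ h = g.
\]
% Dually, the coproduct comes with inclusions i_A, i_B, and a map out of
% it is determined uniquely by its restrictions along i_A and i_B.
```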
Algebraic Topology
Everything I learn is to aid my study of algebraic topology, of course, but here are the pure algebraic topology concepts I covered. The big development was reviewing shaky foundations, including:
 Cell complexes. Hatcher's exposition was way too dense for me at the beginning of the semester, but now that I've matured a little I revisited it in depth, and it actually makes a lot of sense. The general idea of building complexes by attaching higher-dimensional cells is intuitive (and served me well enough for most of the semester), but I finally worked out the details of the structure of attaching maps and cells. Specifically, forming the $n$-skeleton $X^n$ from $X^{n-1}$ by attaching $n$-cells $e_{\alpha}^n$ via maps $\varphi_{\alpha} \colon S^{n-1} \to X^{n-1}$, where $X^n=X^{n-1}\amalg_{\alpha}D_{\alpha}^n/\sim$ and $\sim$ is the relation $x\sim \varphi_{\alpha}(x)$ for $x\in \partial D_{\alpha}^n=S^{n-1}$, so that $X^n=X^{n-1}\amalg_{\alpha}e_{\alpha}^n$ as a set, made no sense to me at first. So I worked out examples with $n=1,2,3$ in detail, which illuminated the attaching formula significantly.
 There were also portions of general CW theory that I had missed, like what a characteristic map is, and the formal definitions of subcomplexes and CW pairs.
 I covered the constructions you can do with CW complexes in depth. For example:
 Products: Essentially the complex with product cells ranging over the base complexes.
 Quotients: These take a CW pair $(X,A)$ and smoosh $A$ down to a point (or $0$-cell).
 Suspension: An important yet seemingly weird construction, built from two cones: the suspension $SX$ is the quotient of $X\times I$ that collapses $X\times \{0\}$ to one point and $X\times \{1\}$ to another. For example, the suspension of the circle $S^1$ looks like a two-sided top.
 Join: This joins two spaces $X$ and $Y$ by line segments between points, formally $X\times Y\times I/\sim$ with the relation $(x,y_1,0)\sim(x,y_2,0)$ and $(x_1,y,1)\sim(x_2,y,1)$. This is a particularly confusing construction, so I worked out several examples in detail. You can also view a simplex as an iterated join of points.
 Wedge Sum: This fundamental construction is made by identifying two spaces together at a point. For example, $S^1\vee S^1$ is the figure eight, $S^1\vee S^1\vee S^1$ is the fidget spinner, and a wedge sum of $n$ circles is the $n$-bouquet. A neat fact is that for any complex, the quotient $X^n/X^{n-1}$ is a wedge sum of $n$-spheres $\bigvee_{\alpha}S_{\alpha}^n$, since collapsing the previous skeleton to a point leaves us with a bunch of $n$-cells joined at a single $0$-cell, precisely a wedge sum of $n$-spheres.
 Smash Product: This is a construction I have never used before, to be honest. It's the quotient of the product by collapsing the wedge sum, or $(X\times Y)/(X\vee Y)$. You can think of it as deleting the overlapping structure of the product. It makes a lot more sense with examples, some of them being:
 The smash product of two circles $S^1\wedge S^1$ is the torus $\mathbb T$ modulo the figure eight, which can be thought of as simultaneously contracting an inner ring and an outer ring, so the resulting structure is the simply connected $2$-sphere $S^2$.
 Smashing any space $X$ with the two-point sphere $S^0$ results in $X$, because $X\times S^0$ is simply two copies of $X$, and collapsing the wedge identifies one entire copy of $X$ (together with the basepoint of the other) to a single point, leaving just $X$ again.
 To generalize the first example, the smash product of an $n$-sphere and an $m$-sphere is the $(n+m)$-sphere, or $S^n\wedge S^m=S^{n+m}$. To see why, the product CW structure has four cells: a $0$-cell, an $n$-cell, an $m$-cell, and an $(n+m)$-cell. The wedge sum consists of everything but the $(n+m)$-cell, and collapsing it to a point yields an $(n+m)$-cell plus a $0$-cell, which is precisely $S^{n+m}$.
 We also have that collapsing a contractible subcomplex to a point preserves homotopy type, which can be used to show that finite connected graphs are homotopy equivalent to wedges of $S^1$, and that $S^2/S^0$ and $S^1\vee S^2$ are homotopy equivalent.
 I also reviewed the Euler characteristic $\chi(X)$, defined as even cells minus odd cells, or formally $\chi(X)=\sum_{n=0}^k(-1)^nc_n$, where $c_n$ is the number of $n$-cells. With $k=2$, and $0$-cells as vertices, $1$-cells as edges, and $2$-cells as faces, this recovers the familiar formula $V-E+F=2$ for convex polyhedra.
 I did an in-depth study of the real projective space $\mathbb R \mathrm P^n$, focusing on how to visualize its realization as the sphere with antipodal points identified, that is, how $\mathbb R\mathrm P^n\cong S^n /(v\sim -v)$. Dealing with lower-dimensional cases helped; the general idea is to realize each line through the origin as a point of the upper hemisphere, since a line passes through both hemispheres and keeping both intersection points would be redundant. Then we can inductively build $\mathbb R\mathrm P^n$ from $\mathbb R \mathrm P^0,\mathbb R \mathrm P^1$, etc., by giving $\mathbb R \mathrm P^n$ a CW structure, which turns out to have exactly one cell in each dimension. That is, $\mathbb R \mathrm P^n=e^0\cup e^1\cup \cdots \cup e^n$.
 I also covered the complex projective space $\mathbb C \mathrm P^n$, and how it has a CW structure $\mathbb C\mathrm P^n=e^0\cup e^2\cup \cdots \cup e^{2n}$. Something I would like to understand better is how exactly we visualize $\mathbb R \mathrm P^2$ and $\mathbb C \mathrm P^1$: the former immerses in $\mathbb R^3$ as Boy's surface, but how exactly that immersion is constructed is unclear to me.
 Since we're going to talk about homotopy theory next week, I read a little about the higher homotopy groups of spheres in Hatcher and started working through their definition.
 Finally, I began reviewing basic fundamental concepts like the fundamental group (haha), and the fact that $\pi_1(S^1)=\mathbb Z$, since I didn't understand that proof the first time around. I also plan on reviewing basic covering space theory.
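The Euler characteristic mentioned above is simple enough to sketch in a few lines of code (the cell counts below are standard examples, not from my notes):

```python
def euler_characteristic(cells):
    """chi(X) = sum over n of (-1)^n * c_n, where cells[n] counts the n-cells."""
    return sum((-1) ** n * c for n, c in enumerate(cells))

# The surface of a cube: 8 vertices, 12 edges, 6 faces, so V - E + F = 2,
# matching the sphere S^2.
print(euler_characteristic([8, 12, 6]))  # 2

# The torus with its standard CW structure: one 0-cell, two 1-cells, one 2-cell.
print(euler_characteristic([1, 2, 1]))   # 0
```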
Group Theory
Before UT, I thought I knew everything there was to know about groups. This turned out to be a horribly misinformed perception of reality.
 To complement the study of products in category theory, I studied the direct product in depth. There's more to it than "addition componentwise": it turns out that if you have two normal subgroups $H,K\trianglelefteq G$ with trivial intersection such that every $g\in G$ can be written uniquely as a product $hk$ for $h\in H$ and $k\in K$, then the amazing fact follows that $G$ must be isomorphic to the product $H\times K$!
 Now I'm working on wrapping my head around the semidirect product, a generalization of the direct product obtained by relaxing normality for one of the subgroups. This means we have to consider the action induced by a homomorphism $\varphi$ from $K$ into the automorphism group of the normal subgroup $H$, and define the product as $(h_1,k_1)(h_2,k_2)=(h_1(k_1\cdot h_2),\,k_1k_2)$ (where $\cdot$ denotes the action given by $\varphi$). This generalizes the direct product because if $\varphi$ is the trivial homomorphism, the operation becomes ordinary componentwise multiplication.
 I also reviewed the idea of free abelian groups, which are defined in a linear-algebraic fashion but turn out to be isomorphic to direct sums of copies of $\mathbb Z$, which I proved as an exercise. Furthermore, they have a basis that generates the group, and every basis has the same number of elements (the rank).
 If I have time, I plan on completing the proof of the fundamental theorem of finitely generated abelian groups, something I have been interested in since my first abstract algebra course. The theorem says that every finitely generated abelian group $A$ is isomorphic to a direct sum of cyclic groups of prime-power order (not uniquely determined), plus copies of the infinite cyclic group $\mathbb Z$, or more concisely\[
A \simeq \bigoplus_i \mathbb Z_{p_i^{k_i}}\oplus \mathbb Z^n.
\]
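The semidirect product multiplication rule above is easy to play with numerically. Here's a small sketch of my own (not from my notes), written additively with $n=4$: taking $\mathbb Z_n$ as the normal subgroup and letting $\mathbb Z_2$ act by negation gives the dihedral group $D_n$.

```python
# Sketch of the semidirect product Z_n ⋊ Z_2, with Z_2 acting on Z_n by
# negation (this is the dihedral group D_n). Elements are pairs (h, k),
# h in Z_n, k in Z_2, multiplied by (h1, k1)(h2, k2) = (h1 + k1·h2, k1 + k2).

n = 4  # illustrative choice

def act(k, h):
    """The action k · h: flip the sign of h in Z_n when k = 1."""
    return h % n if k == 0 else (-h) % n

def mul(a, b):
    (h1, k1), (h2, k2) = a, b
    return ((h1 + act(k1, h2)) % n, (k1 + k2) % 2)

r, s = (1, 0), (0, 1)  # a rotation and a reflection
print(mul(r, s))       # (1, 1)
print(mul(s, r))       # (3, 1): differs from r*s, so the group is nonabelian
print(mul(mul(s, r), mul(s, r)))  # (0, 0): a reflection times a rotation squares to the identity
```

With the trivial action, `act` would just return `h % n` and the rule collapses to componentwise addition, recovering the direct product.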
LaTeX
Planned Learning
 Everything mentioned at the end of the sections above, of course.
 Our complex analysis course never covered things like analytic continuation and what a meromorphic function was, so I'll probably take some notes on such topics in my free time.
 I also want to study more linear algebra, since at UNT we didn't cover very much (as it was a course for engineers). Some topics include reviewing orthogonal and orthonormal bases, learning about inner product spaces, dual spaces, and tensors.
 Finally, over winter I plan to read chapter 5 of Pugh's Analysis (the section on multivariate analysis), in preparation for my differential topology and Riemannian geometry courses next semester.
Detailed notes on every topic mentioned can be found here (or more generally, here).
Sun, 29 Nov 2020 14:24
All Voting Systems are Fundamentally Screwed
[link—standalone]
In light of the recent election, there's a lot of talk going on about alternate voting systems like ranked choice, with claims like "Libertarians are stealing our votes!" and "with ranked choice, ____ would have won!" Here's a mathematical approach to why any proposed alternative wouldn't work. This post is based off notes from the first UT math club lecture given by Tom Gannon, you can find them here. Sorry for the clickbait title. Without further ado, let's begin.
How would we precisely define what a voting system is? Informally, we would probably need a list of alternatives to choose from, for each person to make an ordered list of such alternatives, and have a societal preference list as the output. We would also prefer there to be more than two alternatives, since the two party system has been the cause of much complaint. Mathematically, we would define two sets $A=\{\text{set of outcomes}\}$ and $N=\{\text{set of voters}\}$. Then a voting system would be defined as a function \[
F \colon L(A)^N \to L(A),
\] where $L(A)$ denotes the set of all total orderings of $A$. A full election's worth of ballots is an $N$-tuple $(R_1,\cdots,R_N)\in L(A)^N$ (also called a preference profile), where $R_i$ is voter $i$'s ranking. Let's give some examples of voting systems:
 First Past the Post: This is the system we use today. Everyone will submit a ballot with their top choice, and whoever gets picked as top choice the most will win.
 Borda count: This, along with instant runoff, is a version of the "ranked choice" voting that people ask for. Rank your candidates from best to worst; each candidate earns points based on ballot position (more points for higher placement), then everyone's points get summed up: whichever candidate has the most points wins.
 Last Past the Post: This isn't real, but serves to push the definition of a voting system: it's like FPTP (First Past the Post), but whoever gets picked as top choice the least wins. By definition, this is still a voting system.
 Dictatorship: Our favorite system. There exists a dictator among the voters, and the societal preference list is literally just the dictator's ballot.
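To make the examples concrete, here's a small sketch of two of these systems as functions on preference profiles (the profile and candidate names are made up for illustration; this isn't from the lecture notes):

```python
from collections import Counter

# A ballot is a list of candidates, most-preferred first; a preference
# profile is a list of ballots.

def fptp(profile):
    """First Past the Post: only each voter's top choice counts."""
    tally = Counter(ballot[0] for ballot in profile)
    return max(tally, key=lambda c: tally[c])

def borda(profile):
    """Borda count: position i on a ballot of n candidates is worth
    n - 1 - i points; the candidate with the most total points wins."""
    n = len(profile[0])
    scores = Counter()
    for ballot in profile:
        for i, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - i
    return max(scores, key=lambda c: scores[c])

profile = ([["A", "B", "C"]] * 3    # three voters put A first
           + [["B", "C", "A"]] * 2  # two voters put B first
           + [["C", "B", "A"]] * 2) # two voters put C first

print(fptp(profile))   # A (most first-place votes)
print(borda(profile))  # B (broadly acceptable to more voters)
```

Note that the two systems already disagree on the same profile, which is a hint of the trouble ahead.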
We would probably want our voting system to satisfy some reasonable conditions. For example, if everyone puts in the exact same ballot, then the output should be that ballot, right? More generally, if everyone ranks candidate $A$ above candidate $B$, then in the final ranking $A$ should be above $B$. This condition is called the Pareto condition. We would also want independence of irrelevant alternatives: adding a third candidate shouldn't change the relative order of the other two. Let's define these conditions mathematically:
Definition: A voting system satisfies the Pareto condition (unanimity) if whenever an alternative $a$ is strictly preferred to $b$ in every total ordering $R_1,\cdots,R_N$, then $a$ is strictly preferred to $b$ in $F(R_1,\cdots, R_N)$.
Definition: A voting system is said to be independent of irrelevant alternatives (denoted IIA) if for two preference profiles $(R_1,\cdots,R_N)$ and $(S_1,\cdots,S_N)$ such that for all individuals $i$, alternatives $a$ and $b$ have the same order in $R_i$ as in $S_i$, then alternatives $a$ and $b$ have the same order in $F(R_1,\cdots,R_N)$ as in $F(S_1,\cdots,S_N)$.
These conditions seem pretty reasonable right? Let's take a look at what each of our voting systems satisfies.

Pareto?
 FPTP: Yes. If everyone chooses the same person, he will have the most votes and win.
 Borda: Yes, for the same reason as FPTP.
 LPTP: No. Everyone putting the same guy first will result in him losing.
 Dictatorship: YES!
IIA?
 FPTP: No. Introducing a third party may change the results.
 Borda: No, same reason as FPTP but more severe, since the new candidate will be weighted.
 LPTP: No. Introducing an irrelevant alternative will probably result in him winning.
 Dictatorship: YES!
Do you see a pattern? The only one of our systems that satisfies both Pareto and IIA is the dictatorship! The natural question to ask, then, is: are there any other systems that satisfy both Pareto and IIA? The answer is NO! With more than two alternatives, the only voting system satisfying both Pareto and IIA is a dictatorship. This is called Arrow's Impossibility Theorem.
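You can even watch Borda count fail IIA computationally. In the sketch below (my own example, not from the notes), every voter keeps the same relative order of A and B across the two profiles; only the "irrelevant" candidate C moves, yet the societal order of A and B flips:

```python
from collections import Counter

def borda_scores(profile):
    """Total Borda points per candidate (n - 1 for first place, 0 for last)."""
    n = len(profile[0])
    scores = Counter()
    for ballot in profile:
        for i, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - i
    return scores

# In both profiles, voters 1-3 rank A above B and voters 4-5 rank B above A.
R = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
S = [["A", "C", "B"]] * 3 + [["B", "A", "C"]] * 2

r, s = borda_scores(R), borda_scores(S)
print(r["A"], r["B"])  # 6 7 : society ranks B above A
print(s["A"], s["B"])  # 8 4 : society ranks A above B, so IIA fails
```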
Theorem (Arrow's Impossibility Theorem): Assume that $V$ is a voting system with more than two alternatives which satisfies the Pareto condition and is independent of irrelevant alternatives. Then $V$ is a dictatorship.
Corollary: There exists no voting system that
 has more than two alternatives,
 satisfies the Pareto condition,
 is independent of irrelevant alternatives,
 and is NOT a dictatorship.
I won't get into the proof: you can find it starting from page $4$ here. But the theorem says that any voting system you could possibly come up with will fail at least one of the conditions: in other words, if we want reasonable voting criteria with more than two candidates, no can do! This may be out of character, but here are some practical reasons why other voting systems haven't been implemented:
Instant Runoff
 takes too much time (and therefore money), since tallying requires an elimination round for each candidate
 requires too much mental energy from each voter, forcing them to make several conscious choices rather than one
Borda count
 once again requires too much mental energy from each voter, forcing them to rank each candidate
 tends to elect people who are somewhat decent according to most people, rather than the best candidate that the majority want
Dictatorship
 objectively the best system, since citizens of a dictatorship fairly chose their supreme leader, unlike flawed systems like FPTP
 voting system of choice in wonderful progressive democracies like North Korea, Libya, and Iraq
 this message was approved by Ali Khamenei (Supreme Leader of Iran), Kim Jong-un (Supreme Leader of North Korea), Mswati III (King of Swaziland), and Chairman Mao
Sat, 07 Nov 2020 14:55
MPV Is The Best Video Player
[link—standalone]
If you haven't heard of mpv yet, now's the time to start hearing about it. Forget VLC, Windows Media Player, QuickTime Player, whatever. mpv is indisputably the best media player, and here's why.
Not Bloated
Both visually and internally. The GUI for VLC only gets in the way, so the folks developing mpv did away with the GUI entirely (almost). It hides itself when not in use to maximize precious screen real estate, so all you see is the video when you load up mpv. This is much cleaner than staring at a play button or some GUI that looks like it was made in 2007.
User Centric
Keyboard based controls, fully customizable. Need I say more?
Works out of the box
It just werks™. Always starts up instantly, not even a millisecond of delay (in other words, not written in Python). It plays almost any type of file with no errors. There's no need to endlessly configure to get it to work.
youtube-dl integration: Watch YouTube videos without YouTube!
To me, this is the most mind-blowing feature. If you have youtube-dl installed, mpv can stream videos without opening a browser. All you have to do is provide mpv with the YouTube link (eg mpv youtube.com/s?5jbskjk??? <- this is a random url) and mpv will stream it. Coupled with an RSS reader, this eliminates the need to visit YouTube altogether. Just watch the videos locally!
You could also run mpv youtube.com/298jfs??? --no-video to replace YouTube as a music streaming service (I still recommend downloading videos with youtube-dl and playing them with mpd+ncmpcpp).
Expanding on the "customizable" bit
You can remap anything to any key in your ~/.config/mpv/input.conf file, and make general changes in ~/.config/mpv/mpv.conf. For example, I set mpv to loop gif files and mapped "j" and "k" to seek backwards and forwards, respectively. I also have Newsboat open YouTube videos from my RSS feed with ,v. You can find my mpv config files here, as well as my newsboat config for streaming YouTube videos locally.
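If you're curious, the relevant lines look roughly like this. This is a sketch of the general shape (two separate files, shown together); my actual bindings and seek amounts may differ:

```ini
# ~/.config/mpv/input.conf: remap j/k to seek backwards/forwards
j seek -10
k seek 10

# ~/.config/mpv/mpv.conf: auto-profile that makes gifs loop forever
[extension.gif]
loop-file=inf
```

The `[extension.gif]` section is an mpv auto-profile, applied automatically whenever the loaded file has that extension.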
Tue, 29 Sep 2020 08:41
Running Log: In Each Shave Lies a Philosophy
[link—standalone]
Date           Location     Time     Distance
9/15, 3:35 PM  Clark Field  25 min   4.5 km
"Somerset Maugham once wrote that in each shave lies a philosophy. I couldn't agree more."
–Haruki Murakami
One of my favorite books is "What I Talk About When I Talk About Running" by Haruki Murakami. It's not a bestseller or page-turner, but the charm is in the mundaneness of the book: it's an account of his running endeavors as well as musings on life. Murakami writes that "if my life were turned into a movie, this would be the episode that editors would likely leave out. It's not bad, but it's ordinary and doesn't amount to much. But for me, these memories are meaningful and valuable."
It's a fitting theme with the subject of the book, which is running. Such a mundane activity can hold so much meaning for people. I'm not going to get into my philosophy/running journey or anything like that, but I thought posts like this would be a nice place to record my thoughts after running. Which is what Murakami was doing, except he published his thoughts in a book.
I have to say I didn't choose the best time, though to me this kind of stuff shouldn't matter too much. But a mid-September afternoon really beats down on your mental spirit, combined with the monotony of a track, seeing the same pole over and over again. Not that I'm complaining: when times get rough and I want to quit at mile 1, I think of David Goggins running with four layers of clothes in the desert to train his heat resistance. Then I can always squeeze out another couple hundred meters or two.
When I run or see people run, I always think of the same question. Are you running from something, or are you running toward something? It sounds like a deep philosophical thing, but it's just curiosity, and the answer depends on the day. Seven months ago, I was running toward the finish line in Galveston. Today, I'm running from my pile of homework and reading that I put off over the weekend.
If I were writing a memoir, I wouldn't have to think of a fitting way to end each of these: I could just cram the running logs into one chapter and write a single ending. But alas, this is not the case. Until the next time I run (and write).
Tue, 15 Sep 2020 16:06
Things Won't Make You Happy
[link—standalone]
So I'm big on minimalism— I know it's kind of a meme on the internet. I think there's a lot more to it than black clothing and Instagram photos of empty rooms. Minimalism is the radical response to the rampant consumerism encouraged by corporations, and a rejection of the idea that the set of objects you have is in bijection with how happy you are. To be a minimalist is to embrace boredom and shun consumption.
For most people, minimalism starts with humble beginnings, thoughts like "My desk is too messy!", "Why is my closet always full? I never wear 80% of these clothes", and "I would follow my dream and move to X city, but I have so much stuff and moving it all is going to be a huge pain." (I never struggle with these things: it takes about 30 minutes to load everything I own into a car, with plenty of room to spare.) The reason is simple: every object you own takes up mental space in your brain, like a tenant that doesn't pay rent but nags at you all the time. Every time you walk through a cluttered house, each unused object reminds you that you paid money for it to sit there and do nothing. This mental fatigue adds up over the hundreds and thousands of times you notice a useless thing^{[1]}.
Another example of what brings someone to minimalism is social media. Companies like Facebook and Google employ tons and tons of techniques to ensure that all you care about is the next hit of dopamine. For the average city dweller, the last time they started the day by checking notifications for 10 minutes (or hours) and spent the last hour before sleeping in bed on their phone was probably not too long ago. In a sense, sleep becomes a mere intermission between phone sessions, the mind always looking for the next "thing" or the "things" it missed (of course, in reality they are never missing out on anything that will significantly impact their life positively; on the contrary, they are missing out by using their phones). This is not their fault. Once again, this is not their fault. Billions of dollars have been invested to have engineers make this outcome inevitable, to turn our society into mindless zombies waiting for the next "thing" to happen, whether it be a like on their posts, a new message from somebody, or simply new content to be consumed in an infinite scrolling feed^{[2]}.
Finally, the American Dream. This is the epitome of destructive consumerism: the idea that the ultimate goal in life is to buy cool stuff, a fancy car, a nice house, and then you'll finally be happy and free. Society has ingrained in us the idea of always wanting more stuff: after you get this one thing, you'll finally be satisfied, I swear. Why are the jobs we deem successful (CEOs, businessmen, financial analysts, Silicon Valley, Wall Street, etc.) all based off stepping on others to climb the well-ordering of the set of human beings by net worth? The people who manage corporations propagate consumerism and shove it down our throats, brand loyalty being a prime example. Why would anyone identify themselves with Large Corporation No. 7 because they make cooler stuff than Large Corporation No. 16? (E.g. Apple phones, branded clothing, cars.) It's because these people have normalized it, for the purpose of also owning more things (money).
These examples give rise to a radical reaction that embodies itself in the movement known as minimalism. What's the solution to always being bombarded by notifications and group chat messages? Delete social media. What about my old books that I will never read again but like having on my shelf to show people I'm cultured? Recycle them or donate them (I like giving out old books I've read to friends). The answer to always being told that things are important and life is all about things is to reject things altogether.
The way to do this is to radically reduce the number of "things" you own down to the bare essentials, as a form of rebellion. Honestly, the inconvenience this causes is minimal. For example, not having a browser on my phone has only caused me trouble when I wanted to navigate somewhere. Throwing away people's well-intended presents has strained no relationships. Sentimental items only add to the mental fatigue (as seen in the second paragraph), and only owning four T-shirts just means I have to do laundry twice a week. Minimalists aren't just insane ascetics who hate pleasure; it's about being intentional about where pleasure comes from. However, you will be bored: this is normal, and boredom is a sign of mental growth. Without boredom and solitude, there is no outlet for the brain to produce meaningful work (what if Dalí spent all day scrolling through Instagram instead of heading the Surrealist movement?). Don't worry about missing out, because quite frankly, the next new thing is useless and won't make you happy.
References/Further Reading:
1: To read more, check out Fumio Sasaki's Goodbye, Things.
2: Also check out Cal Newport's Digital Minimalism.
Sun, 13 Sep 2020 19:22
Bike
[link—standalone]
This is possibly the most descriptive title I've ever written!
I've always had an interest in cycling. After reading Murakami's "What I Talk About When I Talk About Running", the urge to compete in a triathlon was awakened within me: I was so close, too! I ran a marathon in February 2020: check. Swam for a couple of years and worked as a lifeguard: check. The only thing left was cycling. Almost like a reminder, my English professor at TAMS last semester was an avid cyclist, always talking about her 60+ mile rides (I listened with a sort of understanding of the "bonk", and envy).
(Also: I enjoyed her musings on Atwood's "The Handmaid's Tale". Is this the best of all possible worlds? ... yes I said yes I will Yes.)
I felt as if the only thing missing for me to get into cycling was a decent bike: buying a thousand-plus dollar bike just seems like extreme consumerism to me, but there was no way I could ride on a road with my slow bike (which had a basket attached to the front! I liked it for riding around campus, but it just wasn't suitable for racing). Almost as if by a miracle, I stumbled upon a very, very old racing bike.
It comes all the way from Ohio, where my brother purchased it at a yard sale (for $5) and fixed it up a little bit before I took it back to Texas. It's a Schwinn World Sport; I believe my model is from 1968: they say it was a popular powerhouse road bike back in the day. I've spent the last couple of weeks trying to fix up the bike. Some of the things I've dealt with include
 Bent Derailleur Hanger
 Inner Tube broke
 Tires too wide (chafing on frame)
 Brake Pads out of alignment
 Seat feels like concrete
 Attaching a water bottle holder
 Breaking (removing) the chain
I met with our local bike expert at UT (through a church friend), and once again realized I have a lot to learn about fixing things. This process of stuff constantly breaking and getting your hands dirty fixing it is oddly familiar, though— it reminds me of when I first installed a Linux distribution, always going through the manual and source code, trying to find the source of all these errors.
In a sense, it's a sign that the thing you're working on is truly yours: as you fix and learn more, you become less reliant on harmful foolproof software and mechanisms, and more autonomous in general. An example: it took me an hour (with the aid of my brother) to learn how to change the engine oil in my car. That one hour of arguably pretty fun work will save me 30 dollars every 6 months by avoiding the mechanic (and will also make me less dependent on others to fix my stuff).
The bike's still busted, but I'm gonna keep working at it. One day I'll be out there with the cool cyclists, going on triple digit rides and seeing everything under the sun (who knows when this day will be)!
Sat, 12 Sep 2020 17:59
My Experience With Graduate vs. Undergraduate Level Courses
[link—standalone]
Back when I went to TAMS, I took "graduate" level coursework, which had concurrent enrollment for both undergraduates and graduate students (each having their own course number, e.g. Math 4500 for undergraduates and Math 5600 for graduate students). To be honest, they weren't very difficult. It felt like an accelerated undergraduate course, but still an undergraduate course. The pace was above average but not unmanageable, and the homework was very reasonable. I thought I was well prepared for graduate level mathematics, but unfortunately I was not.
People always told me I was insane for doing so much math. It started back when I was a wee child in 9th grade, who was "insane" for trying to test out of the (oh so horribly formidable) subject of PreCalculus over the summer. In fact, my Algebra II teacher said she was "looking forward to seeing me earn a C or possibly fail" in Calculus, which was pretty much my driving force to succeed in 10th grade AP Calculus BC. (I got a B.)
Then I was insane for trying to learn Real Analysis over a summer without having touched proofs before. Sure, I admit I was a little off my rocker when I signed up for this one. But I persevered, wrote a college essay on it, and started my senior year at TAMS in both Abstract Algebra and Real Analysis II (strongly against the recommendations of my Academic Advisor, who is a wonderful person by the way).
After acing both courses, I was out of my mind for trying to take Topology (and Abstract Algebra II). I was advised by many people, some even physicists and math majors, that Topology was the hardest undergraduate course UNT had to offer. I was warned of problem sets that would take 2-3 hours per question (sounds like Harvard's 55a), and horror stories of those who failed or dropped out of TAMS due to the class. (Galois theory was difficult too, but no one had taken it.) I was told even graduate students struggled with the class!
Now the courses were nontrivial, of course, but nowhere near the level of challenge that others made them out to be. This built up a sort of false confidence, a mentality that "I can do anything! (with a nontrivial but nowhere near backbreaking amount of effort)". Because to be honest, none of these courses required an insane workload: they all built upon each other, clearly defined everything from the beginning, and if one paid attention and did all the homework and readings, they weren't very hard to do well in.
This marks the first week of my graduate Algebraic Topology course, and I can safely say it's the hardest thing I've ever attempted in my entire life, BY FAR. I've spent hours and hours poring over textbook pages, going through definition lookup chains, and grinding through homework problems, just to stay afloat. It moves at the speed of light: we defined the interval in one minute, and five minutes later, we were talking about the fundamental group of simply connected spaces.
Before class even started, we were assigned a pre-homework with five questions each on Manifolds and CW Complexes. I had never seen a Manifold or CW Complex in my life before (at that point, I doubted whether I had even learned Topology at all)! I worked on it for hours every day, looking up so many definitions and reviewing so many semesters of work, for maybe about 10 hours total, and I managed to solve ... 2.5 problems! Homework 1 was a similar experience. I worked on it for 3-4 hours every day, 6 days of the week, and managed to solve about 6 problems out of 10. It feels like I'm taking Math 55a (looking at the problem sets, they're similar in difficulty) without any of the freshmen struggling with me, the study groups, and the tight community (thanks corona).
I don't really know what I'm trying to say anymore; maybe I'm just ranting. But let this act as a word of caution to qualified undergraduates attempting to register for graduate courses: expect those 3 credit hours to take up 30+ hours of work per week (or equivalently, as much as or more than the rest of your undergraduate classes combined). There's a reason 9 hours is full time for grad students. Of course, I'm going to make it through. I signed up for this course to push my mental boundaries, and now I'm complaining that it's working as intended, isn't that ridiculous? It's going to be a long, very long, hard fight, but in the end I'll stumble out of the ring, face covered in blood, holding up a shaky but triumphant fist.
I was planning on adding a cheesy motivational segment here, but I'm not a good motivational speaker (it would probably have the opposite effect) so I'll leave the reader to find the motivation themselves as an exercise. Or don't, motivation is fickle anyways. Godspeed.
Sat, 05 Sep 2020 21:27
Unblock Port 25: Getting Schooled by a sysadmin
[link—standalone]
If you didn't know, I host my own mail server here at mail.simonxiang.xyz (don't try to type that into your browser). Although setting it up is usually a long and forlorn process (or so I've heard), for me it was actually quite simple. There's a YouTuber out there that goes by Luke Smith that wrote a nice script to easily set up mail on your new server. If you just read the README
and make sure you have all the requirements set up on your domain registrar and VPS provider, it's a two minute process to run the script and be on your merry way with a fancy new email server.
However, after setting it up I couldn't send mail for some strange reason. After testing two possibilities (domain on a spam list, Postfix configuration issue), my cocky self decided that I hadn't messed up anywhere during installation, and that my VPS provider, Vultr, had blocked the port that allows me to send email (port 25) so that I couldn't use their VPS to send spam or something like that. So I sent them a slightly condescending support ticket that went something like this:
Hello Vultr,
Can you unblock Port 25? I know y'all are blocking it because you think I'll send spam emails or something, but I promise I won't.
Regards,
Simon
In 57 seconds, a system administrator at Vultr responded with this (not verbatim):
Hello Simon,
We're not blocking port 25 access. Check your instance's firewall rules and listening services; you can do this by running netstat -uplnt | less. Furthermore, also run iptables -L to list firewall rules, and iptables -F to flush rules from the chain. To test outbound port 25 access, run telnet portquiz.net 25.
Thanks,
[omitted for privacy],
Systems Administrator
After spending the last two months learning how to use Linux and setting up my own server, I thought I knew a lot of commands (I have a terminal "cheat sheet" that's 250+ lines long), and a lot about how Linux works in general.
Well, after reading his response, I felt like I knew nothing. It was a good return to reality: ego is something that creeps into everything (especially when things are going well), and ego leads to complacency. Complacency leads to a slow decline in the quality of work you put out, whether it be personal, academic, or job related. Basically, it's good for us to feel like complete idiots sometimes— it's how we get better.
(After this exchange, I added the three new commands he taught me and some more to a "systems administration" section of my terminal commands cheat sheet).
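His last suggestion, the telnet test, can also be sketched in a few lines of Python (this is my own illustrative snippet, not anything from the ticket):

```python
import socket

def check_port(host, port, timeout=3):
    """Report whether an outbound TCP connection to host:port succeeds."""
    try:
        # create_connection raises OSError if the port is unreachable.
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        return "closed or filtered"

# The equivalent of `telnet portquiz.net 25`, minus the interactive session:
# print(check_port("portquiz.net", 25))
print(check_port("127.0.0.1", 1))  # almost nothing listens on port 1 locally
```

Unlike telnet, this cleanly distinguishes "connected" from "refused or filtered" without hanging in an interactive session.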
Anyways, that's it for now. We should all follow along with this motto (spoken by a very wise person):
"I don't know everything, I just know what I know."
–HT
Fri, 28 Aug 2020 21:58
Searx is a Completely Private Search Engine
[link—standalone]
There's a saying that "Google knows you better than you know yourself" (it probably isn't a saying, but I just said it, so it is). It's well known that search engines like Google build profiles about you from your searches. You may forget about the time you were reading Wikipedia articles about the SCP foundation or the Liberian Civil Wars, but Google never forgets. They remember every search since yesterday, a year ago, ten years ago: storing more information about you as a person than you ever could.
Innocuous questions drastically narrow down targets and provide a lot more personally identifiable information than you think: searching for a local restaurant gives away your location down to a few neighborhoods, your relationship status (depending on the type of restaurant), and maybe narrows down your age range. A Chegg query almost certainly narrows the age range down to about 16-24 years old. You're probably searching the internet (via Google) about all the hobbies and interests you have, and of course Google knows who you hang out with and what you do with your free time.
How do we keep search engines from profiling us? Alternatives like DuckDuckGo and Startpage exist, but these services have their flaws: they aren't open source, meaning we just have to take their word that they aren't harvesting our data. The only way to be completely sure that you have the blinds down with search engines is the standard solution to everything: host it yourself.
Enter Searx. Searx is a completely open-source meta search engine: since it's open source, we know exactly what it's up to. The "meta search" part means it anonymously runs your queries through several search engines and aggregates the results (eliminating the need to create your own search algorithm). It anonymizes your queries by not sending cookies and generating a new browser profile for each session. This way, you can get results from the sites you love (Google, Reddit, DuckDuckGo) without any of the tracking you hate.
In essence, Searx itself is just a piece of software that is run and controlled by whoever is hosting it. So the only true way to be completely private is to host your own instance. You could also use a public instance, which means relying on the administrator not to do anything malicious with your data. I host a public instance of Searx on my server; you can find it here or by clicking on the "search" link below my name on the homepage of this site. Since it's hosted on my server, nothing stops me from logging all your data. You just have to trust me on this one (just kidding, see the footnote on the main page).
Tue, 25 Aug 2020 20:57
Destroying my Computer or: How I Learned to Stop Worrying and Love Linux
[link—standalone]
So for the last two weeks I haven't really been online, due to the fact that I messed up my computer beyond repair. For some reason it brings a strange sense of comfort that I was the one that bricked it, as opposed to Microsoft or something. Yay for user freedoms!
This is what happened— Google was being annoying as usual and sent me my photos with each day in its own directory or something like that, preventing me from using my file manager to quickly live preview the photos. So a lightbulb went off in my head: why not write a script to move every file out of the subdirectories and into the main one! It went something like this:
for d in */; do
    cd "$d" && mv * ../ && cd ..
done
or something dumb like that (not sure if this was the actual syntax). Please don't run this on your home computer.
So it ended up moving every file out of its original directory up one level. I didn't realize this at first and ran it again with sudo to get rid of the error messages (big mistake!). As soon as I did this, I lost control of my keyboard, and it hit me that I was kind of screwed.
The /boot directory was gone, making it extremely difficult for me to go in with grub and rescue at least some files. It makes sense, because everything in a UNIX system is a file. Some of the error messages I got as I attempted to save my system were truly horrific: "Kernel panic - not syncing: Attempted to kill init!", "Trying to continue (this will most likely fail) ...", and my personal favorite,
ERROR: Failed to mount the root device.
Bailing out, you are on your own. Good luck.
Absolutely terrifying.
I've spent the last two weeks or so offline. After spending the last two months actively installing my system, customizing my dotfiles, getting programs to work with each other, hosting software on my server, writing blog posts, etc, I decided to take a break.
At first I was kind of stressed out: "What about my dotfiles? My LaTeX notes? How will I keep my streak of green on GitHub? What am I going to do in my free time?" It was like quitting social media all over again. It's the fear of missing out: the fear that something online is happening and you're not part of it. It's also the fear of boredom: in a generation of cell phones and constant stimulation, there's no one thing that collectively scares us more than being bored.
In the end, I learned to stop worrying. No matter what your brain says, you really aren't missing out on anything: rather, spending more time online means missing out on more of life. I had a lot of fun doing things like working on my bike, meeting up with friends at parks, learning to write again with the proper grip.
On boredom: boredom is the driving force that spurs action; without boredom there is no progress. Solitude and boredom are essential to our mental development, and they have been washed away by the rise of the Internet. As time went on, I slowly began to dread the day I would reinstall Arch and rejoin the virtual world.
As you can see, I've managed to reinstall my system and get access to my server again. It took about a week to get everything working. This time I documented all the commands I ran to set up the software and fix all the bugs in a text file, so if I ever have to do this again, it will be as painless as 1-2-3. Well, that's about it for now— I've also moved onto campus, so if anybody reading this is from UT Austin, let's get in touch. Cheers!
Tue, 25 Aug 2020 07:50
Math is Hard Because Our Short Term Memory Sucks
[link—standalone]
I just came back from attending the 1052nd AMS (sectional) meeting at Penn State, last weekend, and realized that the Kingdom of Mathematics is dead. Instead we have a disjoint union of narrow specialties, and people who know everything about nothing, and nothing about anything (except their very narrow acre). Not only do they know nothing besides their narrow expertise, they don't care!
–Doron Zeilberger, Oct. 28, 2009
Here's a hot take: all the ideas we describe in math are relatively easy to grasp, but mathematicians make things complicated because of our poor short term memory. To give an example, take Calculus— the ideas behind derivatives and integration are really simple (limits of slopes, limits of sums), but we make them difficult by introducing levels of rigor and notation that leave a first-year Calculus student in the dust.
This applies to higher level concepts as well: I believe that groups, fields, metric spaces, topological spaces, etc are fundamentally simple, but they suddenly aren't because of the level of rigor needed to formally approach these topics.
Rigor arises as a process to justify logical reasoning, but if our long term memory capacity were higher and we retained much more of the concepts we learned earlier, there would be no need to obfuscate math with weird symbols and definitions. We add rigor to reassure ourselves that our reasoning built on past subjects is correct, which is why rigor is required before further areas of study can make use of the current topic.
Of course, without rigor there would be no certainty, which is pretty much the point of math itself. So I guess we wouldn't be studying math, it would just be another branch of science, or maybe even just pseudoscience. On the flip side, solving this issue could possibly lead to the Pre20th century level of unification in mathematics, one that hasn't been seen since David Hilbert.
Ever since the 20th century, mathematics has become so vast and fractured that specialization is the only way to survive in the field. No one can claim to truly understand more than a couple of hyperspecialized fields anymore, due to the immense time and effort it takes to read enough papers to become knowledgeable in a field. To quote Doron Zeilberger once again, we no longer have "mathematicians"; instead we have "topological algebraic Lie theorists, algebraic analytic number theorists, pseudospectral graph theorists etc".
This is, of course, not the fault of our mathematicians, but a natural consequence of the direction the field has been going (and, I argue, of our natural short term memory capacity). In an ideal world where our brains were constructed differently, we would have mathematicians consulting and working with a plethora of other mathematicians across topics at the highest level, research and developments readily accessible to the layman who puts in 2 weeks as opposed to 20 years to understand the work, and a rapid flow of major breakthroughs (after all, great proofs like the proof of Fermat's Last Theorem utilized the hyperspecialized work of many, many fields).
But of course, this is just wishful thinking, entertaining a fantasy "what-if" situation. Or maybe it's just BS, and if we truly could remember more, mathematics would still end up the way it is to preserve its exclusivity and prestige. However, it's something interesting to think about: if mathematicians were given a choice between pseudoscience and progress as opposed to rigor and fragmentation, which would they pick?
References: https://sites.math.rutgers.edu/~zeilberg/Opinion104.html
Fri, 07 Aug 2020 19:20
Linear Algebra and Its Applications, Part 2: Derivatives
[link—standalone]
I mentioned in my introduction post that this series would probably end up being about the applications of Linear Algebra to other fields of math or something. Well it's the first post, and we already stopped talking about real life applications! Whoops.
Consider the vector space with basis
\[
\mathscr{B} = \{ 1, x, x^2, ... , x^n \}
\]
over $\mathbb{R}.$ This is known as the vector space of polynomials with real coefficients of degree $n$ or less, denoted here by $\mathbb{R}[x]$ (some texts may use $P_n$). If it looks familiar, it probably showed up as a frequent example in your Linear Algebra problem sets (and is of extreme importance over an arbitrary field in Galois Theory)!
Note: while in Algebra the PID $\mathbb{F}[x]$ for $\mathbb{F}$ a field contains polynomials of arbitrarily large (finite) degree, in this case we will assume $\mathbb{R}[x]$ contains only polynomials of degree at most $n$.
We can write an arbitrary element of any vector space as a linear combination of the elements of its basis set. In this case, an element of $\mathbb{R}[x]$ looks like a polynomial. A linear combination of the elements of $\mathscr{B}$ is of the form
\[
a_0 + a_1x + a_2x^2 + \cdots + a_nx^n.
\]
We can use $f(x)$ and $g(x)$ to denote such linear combinations with shorthand $\sum_{i=0}^{n} a_ix^i$ for $a_i \in \mathbb{R}$. Note that $x$ isn't actually a variable: we haven't defined a way to evaluate $x$ and it doesn't change based off the input. (If you're wondering how we evaluate polynomials in the traditional sense, we use something called the evaluation homomorphism).
We can use this vector space to do some cool things, like take ideas from Calculus and express them in the language of Linear Algebra! Remember the derivative from Calculus: in this case it's a map
$\frac{d}{dx} : \mathbb{R}[x] \to \mathbb{R}[x]$ such that
\[\frac{d}{dx}\left(a_0+a_1x+\cdots+a_nx^n\right)=a_1+2a_2x+\cdots+na_nx^{n-1}.\]
In summation notation, it's saying that
\[\frac{d}{dx}\sum_{i=0}^{n} a_ix^i = \sum_{i=0}^{n-1} (i+1)a_{i+1}x^{i}.\]
Using the standard notations for derivatives, we can write $\frac{d}{dx}f(x)=f'(x)$ for $f(x)\in\mathbb{R}[x].$
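To make the coefficient-shifting concrete, here's a quick sketch implementing the summation formula above (a Python snippet of my own, not part of the original derivation; the function name is made up):

```python
def d_dx(coeffs):
    """Differentiate a polynomial given by its coefficient list [a_0, a_1, ..., a_n]."""
    # The i-th output coefficient is (i+1) * a_{i+1}, matching the summation formula.
    return [(i + 1) * a for i, a in enumerate(coeffs[1:])]

# d/dx (1 + 2x + 3x^2) = 2 + 6x
print(d_dx([1, 2, 3]))  # [2, 6]
```

Note the output list is one entry shorter than the input, just as the sum's upper index drops from $n$ to $n-1$.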
We can show that the map $\frac{d}{dx} : \mathbb{R}[x] \to \mathbb{R}[x]$ is linear. Recall that the conditions for a map $T: V \to V$ to be linear are that
\[
\text{1:}\,\,T(v_1+v_2)=T(v_1)+T(v_2)
\]
\[
\text{2:}\,\,T(\alpha v) = \alpha T(v)
\]
for a vector space $V$ over a field $\mathbb{F}, v_i \in V, \alpha \in \mathbb{F}.$ We know that
\[
\frac{d}{dx}\left(f(x) + g(x)\right) = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)
\]
for $f(x), g(x) \in \mathbb{R}[x],$ satisfying the first condition. Next,
\[
\frac{d}{dx}\left( c f(x) \right) = c \frac{d}{dx}f(x)
\]
for $c \in \mathbb{R}, f(x) \in \mathbb{R}[x],$ so $\frac{d}{dx}$ is linear.
Furthermore, a linear map from a vector space to itself is called a linear transformation (or linear operator), and we can represent such a transformation with a matrix. To find the matrix representation of a linear transformation, examine the column vectors that arise as the images of the basis elements under the transformation. We denote this matrix by $[T]_B$ for a linear transformation $T$ with respect to a basis $B$; so the matrix of $\frac{d}{dx}$ with respect to $\mathscr{B}$ is denoted $[\frac{d}{dx}]_{\mathscr{B}}$.
To find the first column vector, examine the image of $1$ under $\frac{d}{dx}$: clearly it vanishes, since the derivative of a constant is zero. So
\[
[1]_{\frac{d}{dx}}=
\begin{bmatrix}
0 \\
\vdots \\
0 \\
\end{bmatrix}.
\]
Similarly, we have
\[ [x]_{\frac{d}{dx}} =
\begin{bmatrix}
1 \\
0 \\
\vdots \\
0 \\
\end{bmatrix}, \,[x^2]_{\frac{d}{dx}} =
\begin{bmatrix}
0 \\
2 \\
0 \\
\vdots \\
0 \\
\end{bmatrix}, \,[x^3]_{\frac{d}{dx}} =
\begin{bmatrix}
0 \\
0 \\
3 \\
0 \\
\vdots \\
0 \\
\end{bmatrix},
\]
and so on. Intuitively, this is because $\frac{d}{dx}x=1, \frac{d}{dx}x^2 = 2x, \frac{d}{dx}x^3=3x^2,$ etc, and we represent those values with the vectors above by writing them as linear combinations of the elements of $\mathscr{B}$ (e.g. $2x=0\cdot 1+2x+0x^2 + \cdots$, $3x^2=0\cdot 1+0x+3x^2+0x^3+ \cdots).$
If we combine all these column vectors, we can find a matrix representation of the derivative! Here it is in all its glory:
\[
[\frac{d}{dx}]_{\mathscr{B}} =
\begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 2 & 0 & \cdots & 0 \\
0 & 0 & 0 & 3 & \cdots & 0 \\
0 & 0 & 0 & 0 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & n \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{pmatrix},
\]
where $\dim (\mathbb{R}[x]) = n+1$ (or $\mathbb{R}[x]$ having polynomials of a maximum degree $n$). You can multiply this matrix by some polynomial vectors in your free time to see if this really works.
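You can do that multiplication in code, too. Here's a sketch in Python (my own snippet; the function name is made up) that builds $[\frac{d}{dx}]_{\mathscr{B}}$ and applies it to a coefficient vector:

```python
import numpy as np

def derivative_matrix(n):
    """Matrix of d/dx on polynomials of degree <= n, in the basis {1, x, ..., x^n}."""
    D = np.zeros((n + 1, n + 1), dtype=int)
    for i in range(1, n + 1):
        D[i - 1, i] = i  # since d/dx x^i = i x^(i-1)
    return D

# Differentiate 1 + 2x + 3x^2, stored as the coefficient vector [1, 2, 3, 0] with n = 3.
print(derivative_matrix(3) @ np.array([1, 2, 3, 0]))  # [2 6 0 0]
```

The result is the coefficient vector of $2 + 6x$, exactly as the matrix above predicts.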
Furthermore, the derivative matrix has an interesting algebraic property: it's nilpotent. A nilpotent matrix is defined as one that eventually vanishes when repeatedly multiplied by itself, that is, there exists some positive integer $k$ such that
\[
A^k = 0,
\]
where $A$ is a matrix and $0$ denotes the zero matrix.
I won't offer a full proof, but this is an intuitive result: differentiating a polynomial of degree $n$ a total of $n+1$ times yields zero. In other words, an arbitrary polynomial of degree $n$ will vanish when multiplied by the matrix $[\frac{d}{dx}]_{\mathscr{B}}^{n+1}$ (in Leibniz notation, this operator would be written $\frac{d^{n+1}}{dx^{n+1}}$). So clearly $\frac{d^{n+1}}{dx^{n+1}}$ corresponds to the zero matrix, and $k=n+1.$
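A quick numerical check of the nilpotency claim (a Python sketch of my own, using the superdiagonal form of the matrix above):

```python
import numpy as np

n = 4
# The derivative matrix has i in position (i-1, i): a single superdiagonal.
D = np.diag(np.arange(1, n + 1), k=1)

# D^n is still nonzero: it sends x^n to n! times the constant 1 ...
print(np.count_nonzero(np.linalg.matrix_power(D, n)))      # 1
# ... but D^(n+1) is exactly the zero matrix, so k = n + 1.
print(np.count_nonzero(np.linalg.matrix_power(D, n + 1)))  # 0
```

So $k=n+1$ is not just an upper bound but the exact nilpotency index.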
Wed, 05 Aug 2020 21:50
Algebra vs. Analysis: How do you eat your corn on the cob?
[link—standalone]
There's an interesting discussion about relating your mathematical field of study to the way you eat corn on the cob. This sounds ridiculous, but if you look closer, it's actually very interesting. Here's the hypothesis: the algebraists will eat their corn in rows, whereas analysts go in spirals.
Of course, this isn't a theorem, because we have no way to prove it. It's also nontrivial to partition the set of mathematicians into "algebra" and "analysis" equivalence classes (what of the mathematician who studies algebraic topology?). However, for some reason, this rule seems to hold in most cases. Of the mathematicians I've talked to in real life, the one who wrote his PhD thesis on Graph Theory eats his corn in rows, whereas the one who does research in Measure Theory eats his corn in spirals.
Personally, I liked my Algebra courses much more than my Analysis ones— I found them much more enlightening, intuitive, and interesting. People rave about the beauty of Analysis proofs, but I just found them confusing (perhaps because Real Analysis was my first proof-based course). The strange "theorem" about corn on the cob didn't really click with me until it became personal: after some experimentation, it turns out I eat my corn in rows.
There isn't going to be a definitive answer as to why this happens, but we can guess. Here's my proposition: Algebra is all about analyzing structure, which is why algebraists will see the perfectly laid out rows and follow them. Analysis is about finding patterns, which is why analysts will seek out the spiral patterns and follow those instead.
References: https://bentilly.blogspot.com/2010/08/analysisvsalgebrapredictseating.html
Mon, 03 Aug 2020 16:27
[link—standalone]
The best way to keep up with what I'm doing is to periodically check for updates, much like how you periodically run pacman -Syu
on your Arch Linux machine. However, the more people you keep up with and the more individual websites you have to visit, the more of a pain this process becomes. Enter RSS feeds.
Why Use RSS?
RSS feeds function like social media without the bad parts. Rather than having information centralized in one place (and therefore susceptible to being controlled), all you have to do is add a bunch of RSS feeds (URLs) to your RSS reader of choice, and it will aggregate the content for easy viewing. This way, you don't have to individually check a bunch of sites; just refresh your RSS reader, and it will automatically pull updates for all the feeds you've subscribed to.
Of course, it's completely private since all RSS readers do is check for updates to a certain url on someone's page. You can use RSS feeds to replace all sorts of things! For example, you can find RSS feeds for
 Youtube Channels
 Individual Subreddits
 Twitter Feeds
 Public Facebook Pages
 Instagram Feeds
 Github Repository Updates
and of course, blogs! No longer will you have to subject yourself to involuntary surveillance if all you wanted to do was see what someone is up to— which was the original premise of social media anyway, before the mass centralization of internet content.
Recommendations for an RSS Feed Reader
The RSS reader I'm currently using is Newsboat (terminal based, of course), with vim bindings from Luke Smith's config. I'm still working on setting everything up, but you can view my progress on GitHub or my Git server. Note that for Newsboat to start, you'll need to add at least one RSS feed to your ~/.config/newsboat/urls file.
Of course, if you're not a fan of terminal-based applications, there are graphical RSS readers as well. I haven't used any of them, but the popular choices seem to be Newsflow (Windows) and FeedReader (Unix-based systems).
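For reference, here's what a minimal urls file might look like (these feed URLs are placeholders, not real recommendations— one feed per line, with optional quoted tags):

```
# comments start with a hash
https://example.com/rss.xml "blogs"
https://example.org/feed.atom "news"
```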
Sat, 25 Jul 2020 23:19
I'm Never Going to Finish Reading Infinite Jest
[link—standalone]
The year is 2157. I'm sitting in a room with a book in my hand. No, the room is the book. If you look closely, you can make out the slight imprint of the faded words "Infinite Jest" lining the walls, minuscule page numbers spanning the contents like hieroglyphs. We turn our attention to the rest of the "room". On the ceiling is emptiness: there is no light, or any hope of one. In the corner lie my other books, covered with a thick layer of dust. I desperately reach for my copy of Murakami's Kafka on the Shore, grasping at anything that can bring me comfort. Maybe Nakata can lead me out of this place, to the entrance stone and into an alternate reality. Actually, at this point, I would even be OK with Johnny Walker. I brush off the layer of dust, only to reveal another layer of dust. I open the book— it's all dust. "Unimportant." David Foster Wallace is staring at me from the ceiling. Of course, I should have known. I take my Sailor 1911 and dip it in the ink from the pages of the room, and use it to erase the middle third of the floor. We are on the 141st iteration of the Cantor Ternary set, and I have made a total of $2^{141}-1$ deletions, which follows from a quick proof by induction.
Of course, I am already dead, along with all my descendants, their descendants, and their descendants, all the way down to the $10^{38}$th generation. As you can probably tell by now, the year is not 2157. All $e^{e^{87.5}\ln{(2)}}$ of my descendants are cursing me for the day I decided to pick up a copy of Infinite Jest because I heard that David Foster Wallace structured it like the Sierpinski Gasket and included a two-page footnote on the MVT. I reach for the pen— there is no pen. David Foster Wallace is looking into my soul, which is quite easy to do since my body has long since disintegrated away. I too, look into my soul, and see a fine print lightly embroidered along the edges. "Infinite Jest", it reads. Once again, David Foster Wallace reminds me that there is no hope. As I turn a page of the wall, a small television in the corner of the ceiling flickers and updates. The television is watching my every move. "40%", it reads. I resign myself to my fate and erase another third of the floor.
Fri, 24 Jul 2020 22:25
Linear Algebra and Its Applications, Part 1: An Introduction
[link—standalone]
Yes, the title refers to that one horrid textbook that everybody uses. I hope that reading the title brought back some fond memories of your first Linear Algebra course!
Welcome to my new series on the many applications of Linear Algebra! I'm not exactly sure which applications I'll cover or how many just yet, but for now I'll just go with the flow. I'm also not much of an applied mathematician, so I'll probably end up writing about the applications of Linear Algebra in other fields of math or something like that.
Linear Algebra pops up seemingly everywhere— if we rephrase "systems of linear equations" as "many equations that seem somewhat straight if you zoom in", it becomes clear why. Here's an inspirational quote:
"Mathematics is about turning difficult problems into Linear Algebra problems."
–Terence Tao
Actually, it probably wasn't Terence Tao who said that. I probably got the quote wrong too. Anywho, you get the point.
Matrix equations will show up all the time if you work in a STEM field, and about half of modern mathematics is built on the theory of vector spaces. It's a fundamental topic everywhere (and should be taught in high schools!) that deserves some special treatment, and so, the birth of this series. Look forward to my first post on image compression with Singular Value Decomposition!
Thu, 23 Jul 2020 22:04
[link—standalone]
Email Addresses
simon@simonxiang.xyz
simonxiang@utexas.edu
My Public GPG Key
On Cell Phones
If you know me in real life, you probably know my cell phone number or someone that has my cell phone number. The quickest way to contact me is by call. I won't respond to texts quickly because my phone doesn't alert me if someone texts— by nature they convey less important information and are more prone to sucking you into a spiral of distraction. I have many other grievances with text messages that I will not air here. If you do insist on using text, it would be nice if you used Signal though.
On Email
The second best way to contact me is by email, which you can find at the top of this post. I've also attached my public GPG key if you want to send me encrypted emails; you can read my posts on RSA Encryption if you don't know what a GPG key is. If you want to contact me for academic purposes, I've listed my UT email address as well. Be aware that Gmail is the email provider for UT, so don't include anything that you wouldn't put on a Times Square billboard in any email you send to my UT email address.
If you have the time, I would like you to read Donald Knuth's epic stance on email. I too, make an effort to be on the bottom of things— although I may not be as cool as Knuth, doing mathematics still requires intense concentration for long periods of time. The average office worker checks their email once every six minutes. If a mathematician was as easily distracted, they wouldn't be able to prove a single thing all day.
On Social Media
Something I do not have is social media of any kind, including Facebook, Instagram, Reddit, Linkedin, etc. There are a plethora of reasons not to use these services and the reasons to use them are few and far between. Setting aside the strongest case against social media (privacy), these services cause massive damage in other areas of your life through the instant gratification of dopamine hits.
Every time you refresh a Facebook page, you're pulling the lever for a slot machine, anticipating the possible arrival of a shiny new burst of dopamine. Your phone will light up at every new like, follow, or text message, all bringing successive and random hits of dopamine. This conditions you to check your phone as often as possible in order to maximize the chance of receiving the next hit sooner. As expected, this is disastrous for our brains and our ability to concentrate and produce meaningful work. If you want to read more on this topic, I highly suggest reading Digital Minimalism by Cal Newport.
In Short
My email is at the top of the page, and if you know me you most likely have my phone number. Response times will be slow, but it's not a personal thing— if I respond quickly, I'm probably not doing what I'm supposed to be doing. I don't use social media and you shouldn't either— hating social media isn't a boomer-exclusive mindset. Finally, thanks for reading this and have a nice day.
Mon, 20 Jul 2020 22:14
Digital Privacy is Like the Blinds on Your Windows
[link—standalone]
When I'm at home, I prefer to keep my blinds down, unless the Sun is hitting my window at a good angle. It gives me a sense of security and calm to know that I can do whatever I want without the possibility of anyone scrutinizing my actions. I do things like reading books and taking showers behind the safety of my blinds— it's not like these are inherently immoral acts that need to be kept secret, thus justifying my use of blinds. I just like to be left alone when I'm minding my own business.
Some people like keeping their blinds open. They roll them all the way up and proudly declare "I have nothing to hide! Why should I keep my blinds down?" They tell me that trying to keep your blinds down is a waste of time, since you've had them open your entire life anyway. Besides, what's the point of lowering the blinds when you're already being watched? They have a point— their house is made of glass. Everything is transparent from the outside, all the way to their bathroom doors on the second floor.
Of course, it is not their fault for buying a transparent house. The three real estate giants, known as Goggle, Pear, and Megasoft have been pushing to make transparent houses an industry standard for years. They market the houses as the "homes of the future", appealing to the consumer using flashy features like "Fingerprint unlock the front door!", "Higher resolution security camera!", and "Better integration with our Smart typewriter and fan!" Of course, they designed the typewriter and fan to fail unless you own a glass house.
At this point, buying a glass house has been normalized. People are signing away their right to keep the blinds down left and right, for the sake of convenience and faster popcorn delivery by drone. This is not their fault either— the Big Three real estate companies have made it very difficult to find out what you are actually agreeing to when you sign the contract for your new glass house. On the contrary, it is very easy to sign the contract itself, almost too easy, in fact.
You didn't know about the microphone planted in your car, the cameras in the corners of the rooms. The smart fridge sends a report of your weekly groceries to Megasoft. The chair is analyzing the weight of each person that sits on it and sends it over to Goggle for "analytics". Of course, all the electronics are always on and recording by default. You have the option to turn them off individually, but there are so many of them that this is a gargantuan task. You don't even have the option to remove them from your house completely. Of course, some have tried prying them from the walls, only to see them reinstalled following the next mandatory "maintenance update". You don't choose when the maintenance updates happen either— the mechanics force open the door, kick you out, and make you sit on your front lawn until they're done.
Goggle, Pear, and Megasoft get away with this by not releasing the blueprints to the homes they design. Nobody can really prove the extent that the Big Three spy on us, because their blueprints are private and closely guarded. They have the ability to implement whatever kind of tracking device they want at the drop of a hat, because you don't have the ability to look into the inner workings of your house and decide whether you like what they are doing or not. You signed that right away when you signed up to use the house.
Oh, I almost forgot about the tracking device by Pear that is attached to your person at all times. On average, people will spend 5 hours of their day looking at the device, because Pear designed the device to be addictive. It serves the same purpose as an ankle bracelet attached to prisoners, but it sends a little bit more than just your location. It sends your conversations with friends, sleeping hours, personal library, video game data, everything— it sends it all to Pear. Not only is it virtually impossible to disable the tracking "services", most people actually signed up for this one willingly.
Privacy is a fundamental right, not a privilege. With each subsequent invasion of our privacy, the possibility of an Orwellian totalitarian surveillance state becomes more and more real. The government is very happy that we continue to sign our privacy rights away, and is very upset when we resist— just look at what the NSA did to Snowden, and what they're trying to do to the Tor developers. Our world is truly screwed the day building brick-and-mortar houses and closing your blinds becomes illegal. Join me in living in a house that has an open-source blueprint and closing the blinds to stop the peering eyes of The Big Three. Join me in fighting for the right to digital privacy.
Sun, 19 Jul 2020 14:19
Regrettably, This Site Runs Javascript
[link—standalone]
I'm really sad. I had a vision for a completely bloat-free website, but now I'm being plagued by the blight that is Javascript because I wanted pretty equations. Every single time someone loads my rolling blog page, it sends 40-something calls to the external MathJax server to generate the LaTeX equations, slowing down the load time by one or two seconds. Unacceptable! I want my stuff to load instantly all the time.
I've tried to host an instance of MathJax on my server before, but that means installing a bunch of bloat like Node.js on the server end. Plus, I'm not even sure it'll improve response times by that much. Image based solutions look absolutely horrible. Is there any clean solution to writing pretty equations in HTML? If anybody knows of one, please let a poor and struggling mathematician know.
Edit: An additional grievance: \overbrace and \underbrace will break MathJax. It doesn't even compile the rest of your document if you have one of these bad boys in your code! Someone please save me.
Sat, 18 Jul 2020 17:21
Proving RSA Encryption: An Application of Group Theory (Part 3: Digital Signatures and Euler's Totient Function)
[link—standalone]
We've finally proved Fermat's Little Theorem and explained some of the machinery behind groups and rings. Let's continue by defining an important function.
Definition (Euler's Totient Function): Let $\varphi: \mathbb{Z}^+ \to \mathbb{Z}^+$ be defined by letting $\varphi(n)$ equal the number of integers less than or equal to $n$ that are relatively prime to $n$, for $n \in \mathbb{Z}^+$. $\varphi$ is also known as the Euler phi-function.
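This definition translates directly into code. Here's a brute-force sketch in Python (my own illustration, not part of the post):

```python
from math import gcd

def phi(n):
    """Count the integers in 1..n that are relatively prime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(7))   # 6, since 7 is prime
print(phi(12))  # 4: namely 1, 5, 7, 11
```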
As you can see, it can be quite hard to procure a general formula for $\varphi(n)$ for all $n$. However, in some cases it is relatively easy— for example, if $n$ is prime, then $\varphi(n)$ is simply $n-1$. Recall from the last post that the Euler phi-function simply gives the order of the multiplicative group of units $\mathbb{Z}_n^*.$ We now state an important theorem:
Theorem (Euler's Theorem): Let $a \in \mathbb{Z}$ be relatively prime to $n,$ that is, $\text{gcd}(a,n)=1.$ Then
\[
a^{\varphi(n)} \equiv 1 \, (\text{mod} \, n),
\]
where $\varphi$ denotes the Euler phi-function, and $n \in \mathbb{Z}^+.$
Proof: Euler's Theorem is equivalent to the following statement: For all $a \in \mathbb{Z}_n^*$, $a^{\varphi(n)} \equiv 1$ (mod $n$). We know from a previous theorem that any element of a finite group raised to the power of the order of the group is $1.$ Since $|\mathbb{Z}_n^*|=\varphi(n)$, we have
\[
a^{\varphi(n)} \equiv a^{|\mathbb{Z}_n^*|} \equiv 1 \, (\text{mod} \, n). \quad \boxtimes
\]
Notice that the reasoning behind the proof is almost the exact same as the proof of Fermat's Little Theorem: Indeed, Euler's Theorem is simply a generalization of Fermat's Little Theorem— Fermat's Little Theorem simply describes the case where $n$ is prime.
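You can sanity-check Euler's Theorem numerically. A quick Python sketch (the brute-force phi helper is mine, for illustration):

```python
from math import gcd

def phi(n):
    """Brute-force Euler phi-function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# a^phi(n) ≡ 1 (mod n) whenever gcd(a, n) = 1
for n in [10, 15, 21]:
    for a in range(2, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
```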
Let's calculate $\varphi(n)$ for $n=pq$, where $p$ and $q$ are two prime numbers. We want to find the number of integers less than or equal to $n$ that are relatively prime to $n$. First, note that the number of integers less than $n$ is $n-1$, or $pq-1$. We want to subtract the number of integers that share a common factor with $n.$ This is what we have so far:
\[
\varphi(n) = (pq - 1) - ? - ?
\]
Since $n$ is composed of two prime numbers, this would entail the multiples of $p$ and $q$ (due to the unique factorization of $n$) that are less than $n$, since $n=pq$ is already accounted for. There are $p-1$ multiples of $q$ and $q-1$ multiples of $p$ that are less than $n$. This makes sense if you think about $n$ this way. Without loss of generality,
\[
n = pq = q + \cdots + q
\]
$p$ times. Then the distinct numbers
\begin{align}
&1:q \\
&2:q + q \\
&3:q + q + q \\
& \vdots \\
&p-1:q + \cdots + q
\end{align}
are all multiples of $q$ that are less than $n$, hence why we stopped at the term $p-1$. Our goal is to find the number of multiples of $q$: there must be $p-1$ of them, since the set $\{1, 2, 3, \cdots, p-1 \}$ has $p-1$ elements. A similar argument holds for the multiples of $p$. We have some more information on how $\varphi(n)$ looks now...
\begin{align}
\varphi(n) &= (pq - 1) - (p - 1) - (q - 1) \\
&= pq - p - q + 1 \\
&= p(q-1)-(q-1) \\
&= (p-1)(q-1).
\end{align}
...Oh yeah, it's all coming together.
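The formula is easy to verify against a brute-force count. A quick Python check (illustrative only, with small primes of my choosing):

```python
from math import gcd

def phi(n):
    """Brute-force Euler phi-function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi(pq) = (p-1)(q-1) for distinct primes p, q
for p, q in [(3, 5), (5, 7), (11, 13)]:
    assert phi(p * q) == (p - 1) * (q - 1)
```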
Now we finally understand the reasoning behind choosing the seemingly arbitrary number $(p-1)(q-1)$: it's the order of the multiplicative group of units of the integers mod $pq$, which means any unit of $\mathbb{Z}_{pq}$ will reduce to $1$ modulo $pq$ if raised to the power of $(p-1)(q-1)$, by Euler's Theorem. Let us finally prove that pesky lemma from the first post.
Lemma: Let $n=pq$, where $p$ and $q$ are two distinct primes. If $a$ is an integer relatively prime to $n$ and $w$ is an integer such that $w \equiv 1 \, \text{mod} \, (p-1)(q-1)$, then
\[
a^w \equiv a \, (\text{mod} \,n).
\]
Proof: Since $w \equiv 1 \, \text{mod} \,(p-1)(q-1),$ we can write $w = k(p-1)(q-1)+1$ for some $k \in \mathbb{Z}$. So
\begin{align}
a^w &= a^{k(p-1)(q-1)+1} \\
&= (a^{(p-1)(q-1)})^k a \\
&= a(a^{\varphi(n)})^k \\
&\equiv a(1)^k \ \small{\text{by Euler's Theorem}} \\
&= a \, (\text{mod} \, n).
\end{align}
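Here's a numerical spot check of the lemma in Python (my own sketch, with small primes chosen purely for illustration):

```python
from math import gcd

p, q = 5, 7
n = p * q               # 35
m = (p - 1) * (q - 1)   # 24

# any w ≡ 1 (mod 24) should satisfy a^w ≡ a (mod 35) when gcd(a, 35) = 1
for k in range(4):
    w = k * m + 1
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, w, n) == a
```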
With that, we've finally finished the theory behind RSA Encryption. Let's end this series with a real-world application. Follow along on your Linux machines, everyone! (Disclaimer: This should work on any machine with GPG installed).
We can use RSA Encryption to digitally sign messages, that is, generate a signature that anybody with my public key can decrypt, but only I can encrypt. Go ahead and grab yourself a copy of GnuPG, the program we're going to use to create keys and verify signatures. If you're on Linux, just run
sudo pacman -S gnupg
or some variant (if you're using Linux, I hope you know how to install packages).
We'll use the same notation as the original post, refer back to it if you need to. Go ahead and download my public key, the message we're going to be verifying, and the digital signature generated by RSA Encryption.
The algorithm starts by encoding $m$ (hello_world.txt) as an integer. We generate the digital signature by using my private key and letting the signature equal
\[
m^s \, (\text{mod} \, n).
\]
The resultant signature is encoded in the file hello_world.sig above.
If you want to generate a GPG key pair and sign a file of your own, go ahead and open your terminal emulator and run
gpg --gen-key
and follow the instructions. To generate the digital signature for a file, run
gpg --output file.sig --detach-sig file
and replace file.sig with whatever name you want your signature file to be, and file with the file you want to sign.
Now since I used my private key to sign this file, you'll need my public key to verify the signature. cd to the directory where you downloaded my public key, and run
gpg --import simonspublickey.gpg
to add my public key to your keyring.
We verify the signature works by raising it to the power of $r$ (which is in the public key) to yield
\[
(m^s)^r \equiv m^{rs} \equiv m \, (\text{mod} \, n)
\]
by our lemma. Isn't that neat? Basically, it compares both the message $m$ and the signature raised to the power of $r$, denoted $(m^s)^r$, and checks to see if the two match— if they do then great, if they don't, then the signature has been compromised/the message has been tampered with since it was sent, and should not be trusted.
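To see the whole sign/verify loop in miniature, here's a toy version in Python using the textbook example primes $p=61$, $q=53$ (purely illustrative, and nothing like GPG's real key sizes— actual keys use primes hundreds of digits long):

```python
p, q = 61, 53
n = p * q                   # 3233, part of the public key
r, s = 17, 2753             # r public, s private; r*s ≡ 1 mod (p-1)(q-1)
assert (r * s) % ((p - 1) * (q - 1)) == 1

m = 65                      # the "message", already encoded as an integer
sig = pow(m, s, n)          # sign with the private exponent
assert pow(sig, r, n) == m  # anyone can verify with the public key
```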
To follow along on your own machine, cd to the directory you downloaded the files in and run
gpg --verify hello_world.sig hello_world.txt
Look at the third line of the output, which should read
gpg: Good signature from "Simon ... " [unknown]
If it does, then we've successfully verified the digital signature using the steps described. We conclude that I am indeed the author of the message, and that it hasn't been tampered with since publication. (You can ignore the "WARNING: This key is not certified with a trusted signature!" message, it just means I haven't signed my GPG key yet.)
And that's a wrap! If you want to send me encrypted emails, you have my public key now so feel free to do so. Together, we can use the power of math and encryption to protect our content from the peering eyes of the NSA!
Fri, 17 Jul 2020 18:51
Proving RSA Encryption: An Application of Group Theory (Part 2: Fermat's Little Theorem and Ring Theory)
[link—standalone]
This post is going to be a little heavy on the group theory— I'm going to try to build everything from the ground up but having a prior understanding of the basics of groups will be very useful. Let's start by stating an important theorem:
Theorem (Fermat's Little Theorem): Let $a \in \mathbb{Z}$, $p$ a prime. If $p$ doesn't divide $a$, then $p$ divides $a^{p-1} - 1$, that is,
\[
a^{p-1} \equiv 1 \, (\text{mod} \, p).
\]
Those with a background in number theory may be more familiar with this equivalent statement of the theorem: If $a \in \mathbb{Z}$, then $a^p \equiv a \, (\text{mod} \, p)$ for $p$ a prime.
Proof: I have discovered a truly remarkable proof that is too large to be contained in the margins of this post.
Just kidding. Before we prove Fermat's Little Theorem, let's talk about groups and fields. Here's how groups are defined (I'm going to let addition denote an arbitrary binary operation):
Definition: A group $\langle G,+ \rangle$ is a set $G$ closed under addition such that
$\mathscr{G}_1$ (Associativity): For all $a, b, c \in G,$
\[
(a+b)+c = a+(b+c)
\]
$ \mathscr{G}_2$ (Identity): There exists an element $0 \in G$ such that for all $a \in G$,
\[
0+a=a+0=a.
\]
$ \mathscr{G}_3$ (Inverse): For all $a \in G$, there exists an $a' \in G$ such that
\[
a + a' = a' + a = 0.
\]
The cool thing about groups is that they're very simple yet powerful: as long as any set paired with any binary operation satisfies these axioms, it forms a group, and all the theorems from group theory apply to it. It doesn't just have to be $\mathbb{R}$ with the standard addition: anything from the set of invertible complex matrices $(GL(n, \mathbb{C}))$ to the set of permutations of $n$ elements $(S_n)$ forms a group.
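Since the axioms are finitely many checks over a finite set, you can verify them by brute force. A Python sketch (mine, not from the post):

```python
def is_group(elems, op):
    """Brute-force check of the group axioms for a finite set."""
    elems = list(elems)
    # closure
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False
    # associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elems for b in elems for c in elems):
        return False
    # identity
    ids = [e for e in elems if all(op(e, a) == a == op(a, e) for a in elems)]
    if not ids:
        return False
    # inverses
    e = ids[0]
    return all(any(op(a, b) == e for b in elems) for a in elems)

# the integers mod 7 form a group under addition...
assert is_group(range(7), lambda a, b: (a + b) % 7)
# ...and {1,...,6} forms a group under multiplication mod 7
assert is_group(range(1, 7), lambda a, b: (a * b) % 7)
```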
Let's prove a theorem that will come in handy later. Since this post is going to be very technical, I'm not going to explain the proof since I'll have to explain everything that builds onto the main theorem used in the proof, Lagrange's Theorem.
Theorem: Let $G$ be a finite group under multiplication (so 1 is now the identity), and let $|G|$ denote its cardinality (order). Then for all $a \in G$,
\[
a^{|G|} = 1.
\]
Proof: By Lagrange's Theorem, the order of the element $a$ (the smallest positive integer $k$ such that $a^k=1$) must divide $|G|$, that is, $|G|=kn$ for some $n \in \mathbb{N}$. Then
\[
a^{|G|}=a^{kn}=(a^k)^n=1^n=1. \quad \boxtimes
\]
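Here's a concrete instance of this theorem in Python, using the units mod 8 (a group of order 4; my example, not the post's):

```python
from math import gcd

n = 8
units = [a for a in range(1, n) if gcd(a, n) == 1]  # [1, 3, 5, 7]
order = len(units)                                  # |G| = 4

# every element raised to the order of the group gives the identity
assert all(pow(a, order, n) == 1 for a in units)
```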
Let's take a look at the specific group $\mathbb{Z}_p$, consisting of the set of integers
\[
\{0,1,2, ... , p-1\}
\]
for $p$ a prime, paired with the binary operation of addition mod $p$. For example, $5+6$ (mod 7) $\equiv 11$ (mod 7) $\equiv 4$. I'll leave it to you to check that this forms a group. We can actually go further: we can tack on a second binary operation (multiplication mod $p$) and call it a ring.
Rings have many properties, but the important ones are that multiplication is associative, follows the distributive law ($a*(b+c) = a*b + a*c$), and that the ring has an identity for multiplication (there exists an element $1 \in R$ such that $1*a=a*1=a$ for all $a \in R$). Formally, a ring that has an identity element is called a ring with unity.
The thing about rings is that not every element has a multiplicative inverse like groups do. For example, take the element $0$. Since $0+0=0$, $a*0=a*(0+0)=a*0+a*0$ for all $a \in R$. If $a*0=a*0+a*0$, just add the additive inverse of $a*0$ to both sides of the equation to yield $0=a*0+0=a*0$. So $a*0 = 0$ for all $a \in R$. Now if $0$ had a multiplicative inverse, that's saying that there exists some element $0' \in R$ such that $0'*0=1$, which is a contradiction since we would have both $0'*0=0$ and $0'*0=1$. Therefore $0$ cannot have a multiplicative inverse.
Elements of rings that do have multiplicative inverses are called units. We have another handy theorem that lets us classify the units in $\mathbb{Z}_n$! However, I'm going to skip explaining the machinery behind the proof for the same reasons as the first proof.
Theorem: Let $n$ be the order of the ring $\mathbb{Z}_n$ and $m \in \mathbb{Z}_n$. Then if $\text{gcd}(m,n)=1, m$ is a unit.
Proof: Since $\text{gcd}(m,n)=1$, for some $a, b \in \mathbb{Z}$, we have $an+bm=1$ by Bezout's Identity. By the Division Algorithm, we know there exist integers $q$ and $r$ such that $0 \leq r \leq n-1$ and $b=nq+r$. Then
\[
rm = (b-nq)m = bm - nqm = (1-an) - nqm = 1 - n(a+qm).
\]
Since we live in the ring $\mathbb{Z}_n$, multiplication is commutative and multiples of $n$ will reduce to $0$ (mod $n$). Therefore $n(a+qm) \equiv 0 $ (mod $n$), and $rm=mr=1$. Since we have found a multiplicative inverse for $m \in \mathbb{Z}_n$, we conclude that $m$ is a unit. $\quad \boxtimes$
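The proof is constructive: Bezout coefficients hand you the inverse explicitly. A Python sketch via the extended Euclidean algorithm (my own illustration, with hypothetical helper names):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def unit_inverse(m, n):
    """Multiplicative inverse of m in Z_n, assuming gcd(m, n) = 1."""
    g, x, _ = ext_gcd(m, n)
    assert g == 1, "m is not a unit in Z_n"
    return x % n

assert (unit_inverse(3, 10) * 3) % 10 == 1   # 3 * 7 = 21 ≡ 1 (mod 10)
```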
There are special rings in which every nonzero element is a unit, called fields. Intuitively, fields are algebraic structures in which division is legal, since every nonzero element is a unit and therefore will remain in the field if "divided by" (multiplied by the multiplicative inverse of) another element in the field. From the previous theorem, the wonderful fact that $\mathbb{Z}_p$ is a field for $p$ a prime follows! Notice that every nonzero element of $\mathbb{Z}_p$ is relatively prime to $p$ (that is, gcd$(m,p)=1$ for nonzero $m \in \mathbb{Z}_p$) since $p$ is a prime number, so every nonzero element of $\mathbb{Z}_p$ is a unit, hence $\mathbb{Z}_p$ is a field.
Algebra is all about analyzing structure. Like a phoenix rising from its ashes, a group emerges from this field. We claim that the set of units in $\mathbb{Z}_p$ form a group for $p$ a prime. The hardest part about showing that this is a group has already been done! That is, showing that every element has an inverse, which is trivial since every element is a unit.
We are ready to prove Fermat's Little Theorem. Before we do, note that $p$ doesn't necessarily have to be prime for the set of units in $\mathbb{Z}_p$ to form a group. However, since $p$ is prime, we know that every nonzero element is a unit, and therefore we can easily classify this group as consisting of the elements
\[
\{1,2,3, ... ,p-1\}
\]
and having order $p-1$. Let's denote our new group as $\mathbb{Z}_p^*$. For convenience, we'll restate Fermat's Little Theorem.
Theorem (The Little Theorem of Fermat): Let $a \in \mathbb{Z}$, $p$ a prime. If $p$ doesn't divide $a$, then $p$ divides $a^{p-1} - 1$, that is,
\[
a^{p-1} \equiv 1 \, (\text{mod} \, p).
\]
Proof: We can restate Fermat's Little Theorem as such: For all $a \in \mathbb{Z}_p^*$, $a^{p-1} \equiv 1$ (mod $p$). Recall a previous theorem: If $G$ is a finite group under multiplication, then $a^{|G|}=1$ for all $a \in G$. Clearly $\mathbb{Z}_p^*$ is a finite group under multiplication, and the order of $\mathbb{Z}_p^*$ is $p-1$, that is, $|\mathbb{Z}_p^*|=p-1$. Therefore, for all $a \in \mathbb{Z}_p^*$,
\[
a^{p-1} \equiv a^{|\mathbb{Z}_p^*|} \equiv 1 \, (\text{mod} \, p). \quad \boxtimes
\]
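A quick numerical check of the theorem in Python (illustrative only):

```python
# a^(p-1) ≡ 1 (mod p) for every a not divisible by p
for p in [2, 3, 5, 7, 11, 13]:
    for a in range(1, p):
        assert pow(a, p - 1, p) == 1
```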
(For the curious reader: Here is the reasoning behind why the two statements of FLT are equivalent. Every integer will correspond to a coset of the quotient ring $\mathbb{Z}/p\mathbb{Z}.$ In other words, if $b \in \mathbb{Z}$, then $b \in a+p\mathbb{Z}$ for some $0 \leq a \leq p-1$ $(a \in \mathbb{Z}/p\mathbb{Z}),$ which implies $a+p\mathbb{Z}=b+p\mathbb{Z}$ by the definition of cosets. So we can restate FLT as such: $\forall b+p\mathbb{Z}, (b+p\mathbb{Z})^{p-1} \equiv 1$ (mod $p$), which implies $\forall a+p\mathbb{Z}, (a+p\mathbb{Z})^{p-1} \equiv 1$ (mod $p$). Since $\mathbb{Z}/p\mathbb{Z}$ and $\mathbb{Z}_p$ are naturally isomorphic as rings (similarly, $(\mathbb{Z}/p\mathbb{Z})^*$ and $\mathbb{Z}_p^*$ as groups), this statement is the same as $\forall a \in \mathbb{Z}_p^*, a^{p-1} \equiv 1$ (mod $p$). Therefore, FLT is equivalent to the alternate statement.)
Once again, this post has grown quite long. I'll wrap this series up in the next post, look forward to it!
Fri, 17 Jul 2020 16:19
Proving RSA Encryption: An Application of Group Theory (Part 1: Asymmetric Encryption)
[link—standalone]
If someone receives an email from me, they might notice a peculiar "signature.asc" file attached at the bottom. This is not a bug— this is so the recipient can decrypt this file with the RSA encryption algorithm and verify that the sender is in fact, me, and the message has not been tampered with.
In essence, how the RSA algorithm works is by multiplying two very large prime numbers, known only to the recipient of the message, and making the product public (this is known as a public key). The sender uses the product to encrypt a message using an algorithm that only someone with knowledge of the two prime numbers (known as the private key) can decrypt. We can leave the question of the computational difficulty of integer factorization, i.e., whether or not there exists an algorithm that factors products of large primes in polynomial time, to the computer scientists (hopefully there isn't, or else a large portion of modern day encryption becomes useless).
What we're concerned with, as mathematicians, is the validity of the algorithm. We want to know, given any message, will the recipient always be able to decrypt the encrypted message with their private key and get the right message? Let's get right into the math— all you'll need to be familiar with is some basic number theory.
Definition: Let $n$ be the product of two prime numbers $p$ and $q$ respectively, and $s$ be an integer relatively prime to $(p-1)(q-1)$. Let $p, q,$ and $s$ be part of the private key, and let $n$ and $r$ be part of the public key, where $r$ is the multiplicative inverse of $s$ mod $(p-1)(q-1)$. That is, $rs \equiv 1$ mod $(p-1)(q-1)$.
The definitions may seem a little strange and arbitrary— no worries! Everything will fall into place over time. For now, our goal is to show that, using the public and private keys, the sender and recipient can successfully complete an encryption/decryption exchange. Let's define what that is real quick: in this case, let $m$ be the message being encrypted (Note that $m$ is an integer— there are several ways to encode messages as integers that we will not cover here for the sake of brevity).
Encryption: Let $e$ equal the encrypted message, which is determined by
\[
e \equiv m^r \, (\text{mod} \, n).
\]
This is simple since $n$ and $r$ are known, and a computer can easily perform modular exponentiation in a reasonable amount of time.
Decryption: After receiving the encrypted message $e$, we decrypt it by computing
\[
e^s \equiv (m^r)^s \equiv m^{rs} \equiv m \, (\text{mod} \, n).
\]
We can do this because $s$ is part of the private key. Note that in this case, we're dealing with a form of asymmetric encryption, since anyone with the public key can encrypt a message but only the owner of the private key can decrypt it. Everything makes sense up until the last implication. How does $m^{rs} \equiv m \, (\text{mod} \, n)$?
We need a lemma.
Lemma: Let $n = pq$, where $p$ and $q$ are two distinct primes. If $a$ is an integer relatively prime to $n$ and $w$ is an integer such that $w \equiv 1$ mod $(p-1)(q-1)$, then
\[
a^w \equiv a \, (\text{mod} \, n).
\]
Proof: Left as an exercise to the reader.
Just kidding. This lemma is built off a powerful theorem, whose proof relies heavily on group theory. Since this post is getting rather long, I'll prove it in the next post, as well as shed some light on the intuition behind choosing the oddly specific number $(p1)(q1)$ for $s$ to be relatively prime to. Now that we have this lemma, the equivalence
\[
m^{rs} \equiv m \, (\text{mod} \, n)
\]
follows from the fact that we defined $r$ to be the multiplicative inverse of $s$ (so $rs \equiv 1$ mod $(p-1)(q-1)$), and the lemma above, setting $a$ equal to $m$ and $w$ equal to $rs$.
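Putting the definitions together, here's the full encryption/decryption exchange as a toy Python sketch (the small primes and message are mine, for illustration only— real keys are far, far larger):

```python
p, q = 61, 53
n = p * q                 # public modulus: 3233
r, s = 17, 2753           # r public, s private; r*s ≡ 1 mod (p-1)(q-1)
assert (r * s) % ((p - 1) * (q - 1)) == 1

m = 1234                  # message, already encoded as an integer < n
e = pow(m, r, n)          # sender encrypts with the public key
assert pow(e, s, n) == m  # recipient decrypts with the private key
```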
And there we have it! Look out for part two, where I'll talk about digital signatures, group theory, and implementing this in real life.
Tue, 14 Jul 2020 16:33
Why does my website look so old?
[link—standalone]
Here are some reasons why.
The web is bloated
In a world of overdesigning and excessive emphasis on the graphical user experience, the internet has become massively bloated. How many times have you loaded up a website for it to look like this? Or what about this? (I don't condone swearing by the way).
Either way, "modern" websites have become an abomination. The original premise that websites should be distributed, accessible, and convey information effectively has been abandoned in favor of advertisements, bloat, and rampant censorship.
Self-sufficiency on the Internet
When I set out to create this website, I had two goals, the first of which was self-sufficiency. I own the VPS this is hosted on and set up the SSL certificate/nginx myself (with the help of some open source software). I write blog posts in HTML and upload them without the help of a static site generator.
Sure, I'd gain some functionality and save some time if I hosted my site on GitHub or used WordPress as a backend for my blog. However, that comes at the cost of full control over my website. The minuscule time and cost investment is a small price to pay for actually owning your own things. (Doesn't that sound ridiculous? Most people online don't.)
Less is more
My second goal was a design goal: KISS, or, Keep It Simple, S— my friend. I'm a minimalist. I don't like owning branded T-shirts or physical papers. I don't even own a mouse! It doesn't make sense for me to overengineer a simple static text-based website. I only load what I need, and I don't abuse JavaScript like Google does (see: Google Docs protected mode. How does that even exist??).
If you're curious, check out the source code— of course, open source is essential for digital freedom. In terms of design, I took inspiration from these three websites (once again, I don't condone swearing). There's one more reason left as to why my website looks like this...
The final reason
I'm lazy and not very good at HTML/CSS.
Sun, 12 Jul 2020 15:13
Topological Continuity: Simplicity in Abstraction
[link—standalone]
Abstraction is often looked down upon by engineers and physicists as something mathematicians do to make life harder for themselves, obfuscate simple things, and make math less applicable in real life.
However, the opposite is actually true— restricting your viewpoint to one set or space (say, \( \mathbb{R}^n \)) makes things much more complicated than they have to be. Beauty and simplicity lie in levels of abstraction. Let's take a look at an example; the prerequisites are basic knowledge of metric spaces and topological spaces.
Here's the standard rigorous definition of continuity from Real Analysis. We say \( f: \mathbb{R} \to \mathbb{R} \) is continuous at \( x_0 \in \mathbb{R} \) if for all \( \epsilon > 0 \), there exists a \( \delta > 0 \) such that
\[
|x - x_0| < \delta \implies |f(x) - f(x_0)| < \epsilon
\]
for all \( x \in \mathbb{R} \). Of course, the domain doesn't have to be \( \mathbb{R} \); it can be any interval, say \( A \). When a function is continuous for all \( x_0 \in A \), it's continuous on \( A \). If a function is continuous on its domain, we just say it's continuous.
That probably triggered some flashbacks for some poor Calculus students learning about limits for the first time (I, too, feared the epsilon-delta dynamic duo up until I took Analysis). It doesn't have to be this scary! Forget the strange Greek letters; let's make this definition more powerful and simple at the same time.
Observe that the absolute value signs are just measuring the distance between two points in $\mathbb{R}$; indeed, this seems like a job better suited for metric spaces. Let $\mathbb{R}$ be equipped with the standard metric $d: \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \, d(x,y) = |x-y|$. You can easily check that $d$ satisfies the conditions for a metric, so $(\mathbb{R},d)$ is a metric space. Now let's rewrite our previous definition in metric terms: \(f: \mathbb{R} \to \mathbb{R} \) is continuous at \( x_0 \in \mathbb{R} \) if for all \( \epsilon > 0 \), there exists a \( \delta > 0 \) such that
\[
d(x, x_0) < \delta \implies d(f(x),f(x_0)) < \epsilon.
\]
Recall the definition of an open ball in a metric space. Let $(X,d)$ be a metric space, $x_0 \in X, \gamma \gt 0$. Then we define an open ball as
\[ B(x_0, \gamma) = \{ x \in X \mid d(x,x_0) < \gamma \}. \]
Intuitively, it's the set of all points within a certain distance $\gamma$ of another point $x_0$. Notice that the two expressions in the definition just refer to points $(x \in \mathbb{R})$ within a certain positive distance $(\delta, \epsilon)$ of the points $x_0$ and $f(x_0)$, hmmm...
\[
d(x, x_0) < \delta \implies d(f(x),f(x_0)) < \epsilon
\]
implies that
\[
x \in B(x_0, \delta) \implies f(x) \in B(f(x_0), \epsilon)
\]
which subsequently implies
\[
f\left[B(x_0, \delta) \right] \subset B(f(x_0),\epsilon).
\]
The last implication may seem somewhat arcane, but just look closer and you'll see it quickly follows from the definition of the image of a set under a function. But wait! We haven't even seen the final form of continuity yet! Recall that every metric space generates a topology, and that the metric topology is generated by defining open balls as open sets... Now we're ready to define the final layer of abstraction, topological continuity.
Definition: Let $(X, \tau_x )$ and $(Y, \tau_y )$ be topological spaces. Then a function $f: X \to Y$ is continuous if for all open sets $H \in \tau_y, \, f^{-1}(H) \in \tau_x.$
The moment of truth: we transformed a convoluted epsilon-delta definition into a simple, elegant, and much more widely applicable statement: "A function is continuous if the preimage of an open set is open." Not only is this statement much simpler, it also applies to all sorts of spaces— the set of binary sequences, a really long line, cofinite sets, anything. Isn't that great?
Sat, 11 Jul 2020 14:05