Suppose you’re building a widget that performs some simple action, which ends in either success or failure. You decide it needs to succeed 75% of the time before you’re willing to release it. You run ten tests, and see that it succeeds exactly 8 times. So you ask yourself: is that really good enough? Do you believe the test results? If you ran the test one more time, and it failed, you would have only a 72.7% success rate, after all.
So when do you have enough data, and how do you decide that your success rate is ‘good enough’? In this post, we’ll look at how the Beta distribution helps us answer this question. First, we’ll get some intuition for the Beta distribution, and then discuss why it’s the right distribution for the problem.
Consider the widget’s tests as independent boolean variables, governed by some hidden parameter μ, so that the test succeeds with probability μ. Our job, then, is to estimate this parameter: We want a model for P(μ = x | s, f), the probability distribution of μ, conditioned on our observations of success and failure. This is a continuous probability distribution, with μ a number between zero and one. (This general setup, by the by, is called ‘parameter estimation’ in the stats literature, as we’re trying to estimate the parameters of a well-known distribution.)
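The full derivation is behind the ‘Continue reading’ link, but here’s a rough sketch of where it ends up (my own illustration, not code from the post): assuming a uniform prior over μ, the posterior after s successes and f failures is Beta(s + 1, f + 1), and we can ask directly how likely it is that μ clears the 75% bar.

```python
# A minimal sketch, assuming a uniform prior over the success rate mu.
from scipy.stats import beta

s, f = 8, 2                      # the widget's observed successes and failures
posterior = beta(s + 1, f + 1)   # posterior for mu is Beta(9, 3)

# How likely is it that the true success rate is at least 75%?
print(posterior.sf(0.75))        # sf(x) = 1 - cdf(x); roughly 0.54 here
```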
Continue reading →
Seeing Like a Statistical Learning Algorithm
sdenton4
I recently had the pleasure of reading James Scott’s “Seeing Like a State,” which examines a certain strain of failure in large centrally-organized projects. These failures come down to the kinds of knowledge available to administrators and governments: aggregates and statistics, as opposed to the kinds of direct experience available to the people living ‘on the ground,’ in situations where the centralized knowledge either fails to describe, or has no chance of describing, a complex reality. The book classifies these two different kinds of knowledge as techne (general knowledge) and metis (local knowledge). In my reading, the techne – in both strengths and shortcomings – bears similarity to the knowledge we obtain from traditional algorithms, while metis knowledge is just starting to become available via statistical learning algorithms.
In this (kinda long) post, I will outline some of the major points of Scott’s arguments, and look at how they relate to modern machine learning. In particular, the divides Scott observes between the knowledge of administrators and the knowledge of communities suggest an array of topics for research. Beyond simply looking at the difference between the ways that humans and machines process data, we observe areas where traditional, centralized data analysis has systematically failed. And from these failures, we glean suggestions of where we need to improve machine learning systems to be able to solve the underlying problems.
Turing’s Cathedral, and the Separation of Math and Industry
For me, the book emphasized the importance of overcoming (or circumventing) boundaries in the pursuit of scientific progress. Von Neumann in particular became obsessed with applications (particularly after Gödel’s theorem put an end to the Hilbert programme), and served as a bridge between pure and applied mathematics. Meanwhile, construction of the physical computer brought in a variety of brilliant engineers. It’s clear that the departmental politics at the IAS were still quite strong – the pure mathematicians didn’t have much regard for the engineers, and the computer project ground to a halt quite senselessly after von Neumann left the IAS. Dyson argues that Princeton missed an opportunity to be a world center for computing theory and practice as a result.
Continue reading →
sdenton4 | 1 Comment
I’ve been playing around with Cities: Skylines recently, the super-popular SimCity knock-off. Dealing with traffic is a core theme of the game (as it should be). Traffic tends to accumulate at intersections, and it’s well known that one-way streets have higher traffic flow. The logical conclusion, then, is to try to build a city with a single extremely long one-way street… Unfortunately we have to compromise on this perfect vision, because people want to get back home after work and so on.
Meanwhile, a space-filling curve is a mathematical invention of the 19th century, and one of the earlier examples of a fractal. The basic idea is to define a path that passes through every point of a square, while also being continuous. This is accomplished by defining a sequence of increasingly twisty paths (H1, H2, H3, …) in such a way that H∞ is well-defined and continuous. Of course, we don’t want an infinitely twisty road, but the model of the space-filling curve will still be useful to us.
There are a few important ideas behind the space-filling curve. The first is the notion that by getting certain properties right in the sequence of curves H1, H2, H3, …, we’ll be able to carry those properties over to the limit curve H∞.
The second main idea is how to get continuity. Thinking of the curve as a function where you’re at the start at time 0, and you always get to the end at time 1, we want an H∞ where small changes in time produce small changes in position. The tricky part here is that the path itself gets longer and longer as we try to fill the square, potentially making continuity hard to satisfy: each time the length of the path doubles, you have to move twice as fast to finish by time 1.
In fact, because of continuity, you can also “go backwards:” Given a point in the square, you can approximate the time at which the limit curve H∞ would have passed through that point, with arbitrary precision. This gives a direct proof that the curve actually covers the whole square.
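As a concrete illustration (my own sketch, not code from the post), here’s one standard way to compute the finite approximations: a function that converts a step index d along an order-n Hilbert curve into grid coordinates, so that listing d = 0, 1, 2, … walks the curve in order.

```python
def hilbert_d2xy(order, d):
    """Map step d of an order-`order` Hilbert curve to (x, y) on a 2**order grid.

    This is the standard iterative index-to-coordinate conversion.
    """
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # reflect/rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The 16 cells of a second-order Hilbert curve, in the order the path visits them:
print([hilbert_d2xy(2, d) for d in range(16)])
```

The second-order curve here is the same shape used for the ‘Hilbertville’ blocks further down.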
Here’s an example of a space-filling curve which is not continuous. Define Bk as the curve you get from following these instructions:
1. Start in the lower-left corner.
2. Go to the top of the square, and then move right by 1/k.
3. Move to the bottom of the square, and move right by 1/k.
4. Repeat steps 2 and 3 until you get to the right side of the square.
The problem here is that a very small change in time might take us all the way from the top of the square to the bottom of the square. We need to be twistier to make sure that we don’t jump around in the square. The Moore curve, illustrated above, does this nicely: small changes in time (color) don’t move you from one side of the square to the other.
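To see this failure numerically (again my own sketch, following the instructions above), here’s Bk parameterized by arc length, so that t = 0 is the start and t = 1 is the end. As k grows, the same small step in time produces a larger and larger jump in position, which is exactly why the limit fails to be continuous.

```python
def bk_position(t, k):
    """Position on the back-and-forth curve B_k at time t in [0, 1]."""
    total = k + 1.0                 # k vertical legs of length 1, plus k horizontal legs of length 1/k
    leg = 1.0 + 1.0 / k             # one vertical leg plus one horizontal leg
    s = t * total                   # arc length traveled so far
    i = min(int(s // leg), k - 1)   # which column we're in
    local = s - i * leg
    x = i / k
    going_up = (i % 2 == 0)
    if local <= 1.0:                # on the vertical leg
        return (x, local if going_up else 1.0 - local)
    # on the horizontal leg, at the top (if we just went up) or the bottom
    return (x + (local - 1.0), 1.0 if going_up else 0.0)

# The same small time step moves us farther and farther as k grows:
for k in (10, 100, 1000):
    print(k, bk_position(0.5, k), bk_position(0.501, k))
```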
What happens if we try to use space-filling curves to build a city in Cities: Skylines?
My first attempt at building ‘Hilbertville’ was to make large blocks, with a single, winding one-way road for access, using the design of a (second-order) Hilbert Curve. In addition to the roads, though, I placed a number of pedestrian walkways, which allow people on foot to get in and out of these neighborhoods directly. I like to think that this strongly encourages pedestrian transit, though it’s hard to tell what people’s actual overall commuting choices are from the in-game statistics.
Skylines only allows buildings directly facing a road; corners tend to lead to empty space. You can see a large empty square in the middle of the two blocks pictured above. There are also two smaller rectangles and two small empty squares inside of each of these two blocks. Making the top ‘loop’ a little bit longer removed most of the internal empty space. This internal space is bad from the game perspective; ideally we would still be able to put a park in the empty spaces to give people extra space, but even parks require road access.
Intersections with the main connecting roads end up as ‘sinks’ for all of the traffic congestion. So we should try to reduce the number of such intersections… The Moore curve is a slight variation on the Hilbert curve which puts the ‘start’ and ‘finish’ of the path next to one another. If we merge the start and finish into a wide two-way road, we get this:
We still get the wasted square between neighborhoods, but somewhat reduce the amount of empty interior space. Potentially, we could develop a slightly different pattern and alternate it between blocks to eliminate the lost space. Also, because the entrance and exit to the block coincide, we get to halve the number of intersections with the main road, which is a big win for traffic congestion.
The empty space is actually caused by all of the turns in the road; fewer corners mean fewer wasted patches of land. The easiest way to deal with this is to just use a ‘back-and-forth’ one-way road, without all of the fancy twists.
The other major issue with this style of road design is access to services. Fire trucks in particular have a long way to go to get to the end of a block; the ‘fire danger’ indicators seem to think this is a bad idea. I’m not sure if it’s actually a problem, though, as the amount of traffic within a block is next to none, allowing pretty quick emergency response in spite of the distance.
Overall, I would say it’s a mixed success. There’s not a strong reason to favor the twisty space-filling curves over simpler back-and-forth one-way streets, and in either case the access for fire and trash trucks seems to be an issue. The twistiness of the space-filling curve is mainly used for getting the right amount of locality to ensure continuity in the limit curve; this doesn’t serve a clear purpose in the design of cities, though, and the many turns end up creating difficult-to-access corner spaces. On the bright side, though, traffic is reduced and pedestrian transit is strongly encouraged by the design of the city.
One of my favorite graphics in the book was a scatter plot adapted from a physics paper, mapping four dimensions in a single graphic. It’s pretty typical to deal with data with much more than three dimensions; I was struck by the relative simplicity with which this scatter plot was able to illustrate four dimensional data.
I hacked out a bit of Python code to generate similar images; here’s a 4D scatter plot of the Iris dataset:
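The original script isn’t included in this excerpt, but here’s a minimal sketch in the same spirit, encoding the Iris data’s four features as position, color, and marker size (assuming matplotlib and scikit-learn are installed):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data  # columns: sepal length, sepal width, petal length, petal width

plt.scatter(
    X[:, 0], X[:, 1],      # first two dimensions: position
    c=X[:, 2],             # third dimension: color
    s=40 * X[:, 3],        # fourth dimension: marker size
    cmap="viridis", alpha=0.7,
)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.colorbar(label=iris.feature_names[2])
plt.show()
```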
Continue reading →
Machine Learning Resources for Mathematicians
sdenton4
I met up with some mathematician friends in Toronto yesterday, who were interested in how one goes about getting started on machine learning and data science and such. There are piles of great resources out there, of course, but it’s probably worthwhile to write a bit about how I got started, and point to some resources that might be of more interest to people coming from a similar background. So here goes.
First off, it’s important to understand that machine learning is a gigantic field, with contributions coming from computer science, statistics, and occasionally even mathematics… But on the bright side, most of the algorithms really aren’t that complicated, and indeed they can’t be if they’re going to run at scale. Overall though, you’ll need to learn some statistics, algorithms, and programming.
Oh, and you need to do side-projects. Get your hands dirty with a problem quickly, because it’s the fastest way to actually learn.
Continue reading →
Principal Component Analysis via Similarity
sdenton4
Recently I’ve seen a couple nice ‘visual’ explanations of principal component analysis (PCA). The basic idea of PCA is to choose a set of coordinates for describing your data where the coordinate axes point in the directions of maximum variance, dropping coordinates where there isn’t as much variance. So if your data is arranged in a roughly oval shape, the first principal component will lie along the oval’s long axis.
My goal with this post is to look a bit at the derivation of PCA, with an eye towards building intuition for what the mathematics is doing.
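The derivation itself is past the fold; as a point of reference (my own sketch, not the post’s code), here’s the usual covariance-eigendecomposition route to the principal components:

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the directions of largest variance."""
    X_centered = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(X_centered, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # symmetric, so use eigh
    order = np.argsort(eigvals)[::-1]               # sort directions by variance
    components = eigvecs[:, order[:n_components]]   # keep the top few directions
    return X_centered @ components                  # data in the new coordinates

# For an oval-shaped cloud of points, the first component lies along the long axis.
rng = np.random.default_rng(0)
oval = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
print(pca(oval, n_components=1)[:3])
```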
Continue reading →
Kaggle Social Networks Competition
sdenton4
This week I was surprised to learn that I won the Kaggle Social Networks competition!
This was a bit different from other Kaggle competitions. Typically, a Kaggle competition will provide a large set of data and ask you to optimize some particular number (say, turning anonymized personal data into a prediction of yearly medical costs). The dataset here intrigued me because it’s about learning from and reconstructing graphs, which is a very different kind of problem. In this post, I’ll discuss my approach and insights on the problem.
Continue reading →
sdenton4 | 4 Comments
Once upon a time in the late nineties, the internet was a crypto-anarchist’s dream. It was a new trans-national cyberspace, mostly free of the meddling of any kind of government, where information could be exchanged with freedom, anonymity, and (with a bit of work) security. For a certain strain of crypto-anarchist, The Temporary Autonomous Zone (TAZ) was a guiding document, advocating small anarchist societies in the blank spaces of existing society temporarily beyond the reach of government surveillance or regulation. This was a great idea with some obvious drawbacks: On the one hand, TAZ served as a direct inspiration for Burning Man. On the other hand, it eventually came out that Peter Lamborn Wilson (who authored TAZ under the pseudonym Hakim Bey) was an advocate of pedophilia, which had clear implications as to why he wanted freedom from regulation. It’s a document whose history highlights the simultaneous boundless possibilities and severe drawbacks of anarchism.
Against this background, Lawrence Lessig’s Code made the case that the internet TAZ was in fact temporary. Lessig argued that the internet’s behaviour is determined by a combination of computer code and legal code, and that while the legal code hadn’t been written yet, it would be soon. His prediction (which has largely been realized) was that the internet would lose its anarchic character through government regulation mixed with a need for security and convenience in commercial transactions. (In addition to these forces, social media also came along, in which people largely sacrificed their anonymity willingly for the convenience of being able to easily communicate with their meatspace social networks.)
In thinking about Bitcoin, it’s useful to see how regulation came to change the internet. The prediction (again pretty much correct) was that regulations would target large companies instead of individual users. Companies are compelled to follow the law under the ultimate threat of not being allowed to operate at all. Because of the tendency for people to glom onto just a few working solutions, it becomes easy to enact regulation on a broad base of users by targeting a few large entities.
Continue reading →
My Favorite Linux Command Line Tricks
sdenton4 | 2 Comments
This week I’m at the IMA workshop on Modern Applications of Representation Theory. So far it’s been really cool!
One of the graduate students asked me about how one goes about learning the Linux command line, so I thought I would write down a few of the things I think are most useful on a day-to-day basis. Such a list is sure to stir controversy, so feel free to comment if you see a grievous omission. In fact, I learned the command line mainly through installing Gentoo Linux back before there was any automation whatsoever in the process, and suffering through lengthy forum posts to get every bit of my system more-or-less working. (Note: Starting with Gentoo is probably a bad idea. I chose it at the time because it had the best forums, but there are probably better places to start now. I mainly use Xubuntu these days.)
So, off to the races. I’m going to skip the really, really basic ones, like ls, cd, apt-get and sudo. In fact, there are plenty of good introductions out there that cover the basics, including IO redirection. Finally, I’m assuming that one is using the bash shell.
Continue reading →
tom denton is a slightly-reformed mathematician, working as a machine educator in the Bay Area. In addition to statistical learning algorithms and combinatorics, tom is also interested in hardware hacking, e-learning, music, and bicycles.
Recent Posts
Evaluating Success Rates with the Beta Distribution
Seeing Like a Statistical Learning Algorithm
Turing’s Cathedral, and the Separation of Math and Industry