That’s the secret of programming.

More Posts from Jupyterjones and Others

6 years ago
The Simple Harmonic Oscillator

Anonymous asked: Please explain the intuition of solving the SHM equation.

Okay Anon! Here you go, this is my rendition.

The problem

You have a mass suspended on a spring. We want to know where the mass will be at any instant of time.

Describe the motion of the mass

image

The physical solution

Now, before we get to the math, let us first visualize the motion by attaching a spray-paint can as the mass and dragging a sheet of paper past it at a steady speed.

image

Oh wait, that looks like a function we are familiar with: the sinusoid.

image

Without even having to write down a single equation, we have found the solution to our problem: the motion traced out by the mass is a sinusoid.

But what do I mean by a sinusoid?

If you took the plotted paper and tried to recreate that function as a sum of polynomials, i.e. x, x², x³, …, this is what it would look like:

image

By taking infinitely many of these polynomial terms, you recover the function exactly. Since this series of polynomials occurs so often, it is given its own name: sine.
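If you want to play with this yourself, here is a small sketch (names are my own, not from the post) of the partial sums of that polynomial series, which converge to the sine function:

```python
import math

def sine_series(x, terms):
    """Partial sum of the series x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

# With 10 terms the partial sum already matches math.sin
# to well beyond ordinary floating-point display precision.
approx = sine_series(1.0, 10)
```

Each extra term bends the polynomial a little closer to the sinusoid traced by the mass.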

image

I hope this sheds some light on the intuition behind the SHM equation. Have fun!
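As a numerical sanity check on the physical argument above, here is a minimal sketch (mass, spring constant, and step sizes are hypothetical choices of mine, not from the post) that integrates the spring equation x'' = -ω²x and lands on the sinusoidal answer:

```python
import math

def simulate_shm(x0=1.0, v0=0.0, omega=2.0, dt=1e-4, steps=20000):
    """Semi-implicit Euler integration of x'' = -omega^2 * x."""
    x, v = x0, v0
    for _ in range(steps):
        v -= omega**2 * x * dt   # spring force updates velocity
        x += v * dt              # velocity updates position
    return x

# After t = steps * dt = 2.0 seconds, the analytic solution
# predicts x(t) = x0 * cos(omega * t) = cos(4).
final_x = simulate_shm()
```

The simulated position agrees with cos(ωt), which is exactly the sinusoid the spray-paint picture suggested.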

4 years ago

Depixellation? Or hallucination?

There’s an application for neural nets called “photo upsampling” which is designed to turn a very low-resolution photo into a higher-res one.

Three pixellated faces are turned into higher-resolution versions. The higher-resolution images look pretty realistic, even if there are small weirdnesses about their teeth and hair

This is an image from a recent paper demonstrating one of these algorithms, called “PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models”

It’s the neural net equivalent of shouting “enhance!” at a computer in a movie - the resulting photo is MUCH higher resolution than the original.

Could this be a privacy concern? Could someone use an algorithm like this to identify someone who’s been blurred out? Fortunately, no. The neural net can’t recover detail that doesn’t exist - all it can do is invent detail.

This becomes more obvious when you downscale a photo, give it to the neural net, and compare its upscaled version to the original.

Left: Luke Skywalker (The Last Jedi, probably) in a blue hood. Center: Highly pixelated version of the lefthand image. Right: Restored image is a white person facing the camera straight on - instead of a hood, they have wispy hair, and the lips are where Luke’s chin used to be.

As it turns out, there are lots of different faces that can be downscaled into that single low-res image, and the neural net’s goal is just to find one of them. Here it has found a match - why are you not satisfied?
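The many-to-one nature of downscaling is easy to demonstrate directly. This toy sketch (a box-filter average over grayscale values; the tiny 2x2 "images" are my own illustration, not from the post) shows two completely different images collapsing to the identical low-res pixel:

```python
def downscale(img, f):
    """Box-filter downscale: average each f x f block into one pixel."""
    n = len(img)
    return [[sum(img[i*f + di][j*f + dj]
                 for di in range(f) for dj in range(f)) / f**2
             for j in range(n // f)]
            for i in range(n // f)]

checkerboard = [[0, 100], [100, 0]]   # high-contrast pattern
flat_gray    = [[50, 50], [50, 50]]   # uniform gray
# Both collapse to the same single gray pixel,
# so no upscaler can tell which one it started from.
same = downscale(checkerboard, 2) == downscale(flat_gray, 2)
```

Since the low-res image is identical in both cases, any upscaled "reconstruction" is necessarily a guess.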

And it’s very sensitive to the exact position of the face, as I found out in this horrifying moment below. I verified that yes, if you downscale the upscaled image on the right, you’ll get something that looks very much like the picture in the center. Stand way back from the screen and blur your eyes (basically, make your own eyes produce a lower-resolution image) and the three images below will look more and more alike. So technically the neural net did an accurate job at its task.

Left: Kylo Ren from the shoulders up. Center: highly pixelated (16x16) version of the previous image. Right: Where Kylo’s cheekbones were, there’s now voldemort-like eyes. Where his chin was, is now the upper lip of someone whose lower face is lost in shadow.

A tighter crop improves the image somewhat. Somewhat.

Left: Kylo Ren cropped tightly to the head. Center: Pixelated version of the picture on the left. Right: Reconstructed version looks a bit like that one photo of Jon Snow with closed eyes.

The neural net reconstructs what it’s been rewarded to see, and since it’s been trained to produce human faces, that’s what it will reconstruct. So if I were to feed it an image of a plush giraffe, for example…

Left: the head of a plush giraffe. Center: 16x16 version of the previous image. Right: reconstructed to look a bit like Benedict Cumberbatch, if he had rather orange skin and glowing blue eyes and a couple of diffuse blobs floating on either side of his head.

Given a pixellated image of anything, it’ll invent a human face to go with it, like some kind of dystopian computer system that sees a suspect’s image everywhere. (Building an algorithm that upscales low-res images to match faces in a police database would be both a horrifying misuse of this technology and not out of character with how law enforcement currently manipulates photos to generate matches.)

However, speaking of what the neural net’s been rewarded to see - shortly after this particular neural net was released, twitter user chicken3gg posted this reconstruction:

Left: Pixelated image of US President Obama. Right: “Reconstructed” image of a white man vaguely resembling Adam Sandler.

Others then did experiments of their own, and many of them, including the authors of the original paper on the algorithm, found that the PULSE algorithm had a noticeable tendency to produce white faces, even if the input image hadn’t been of a white person. As James Vincent wrote in The Verge, “It’s a startling image that illustrates the deep-rooted biases of AI research.”

Biased AIs are a well-documented phenomenon. When its task is to copy human behavior, AI will copy everything it sees, not knowing what parts it would be better not to copy. Or it can learn a skewed version of reality from its training data. Or its task might be set up in a way that rewards - or at the least doesn’t penalize - a biased outcome. Or the very existence of the task itself (like predicting “criminality”) might be the product of bias.

In this case, the AI might have been inadvertently rewarded for reconstructing white faces if its training data (Flickr-Faces-HQ) had a large enough skew toward white faces. Or, as the authors of the PULSE paper pointed out (in response to the conversation around bias), the standard benchmark that AI researchers use for comparing their accuracy at upscaling faces is based on the CelebA HQ dataset, which is 90% white. So even if an AI did a terrible job at upscaling other faces, but an excellent job at upscaling white faces, it could still technically qualify as state-of-the-art. This is definitely a problem.

A related problem is the huge lack of diversity in the field of artificial intelligence. Even an academic project with art as its main application should not have gone all the way to publication before someone noticed that it was hugely biased. Several factors are contributing to the lack of diversity in AI, including anti-Black bias. The repercussions of this striking example of bias, and of the conversations it has sparked, are still being strongly felt in a field that’s long overdue for a reckoning.

Bonus material this week: an ongoing experiment that’s making me question not only what madlibs are, but what even are sentences. Enter your email here for a preview.

My book on AI is out, and you can now get it in any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore

7 years ago

Hey guys, I’m observing a high school class and was looking at a textbook, and learned that irrationals are closed under addition! Super cool, who knew!


5 years ago
Planetary Frequencies.

6 years ago

There are 27 straight lines on a smooth cubic surface (always; for real!)

This talk was given by Theodosios Douvropoulos at our junior colloquium.

I always enjoy myself at Theo’s talks, but he has picked up Vic’s annoying habit of giving talks that are nearly impossible to take good notes on. This talk was at least somewhat elementary, which means I could follow it while being completely unsure of what to write down ;)

——

A cubic surface is a two-dimensional surface in three dimensions which is defined by a cubic polynomial. This statement has to be qualified somewhat if you want to do work with these objects, but for the purpose of listening to a talk, this is all you really need.

The amazing theorem about smooth cubic surfaces was proven by Arthur Cayley in 1849, which is that they contain 27 lines. To be clear, “line” in this context means an actual honest-to-god straight line, and by “contain” we mean that the entire line sits inside the surface, like yes all of it, infinitely far in both directions, without distorting it at all. 

image

(source)

[ Okay, fine, you have to make some concession here: the field has to be algebraically closed and the line is supposed to be a line over that field. And $\Bbb R$ is not algebraically closed, so a ‘line’ really means a complex line, but that’s not any less amazing because it’s still an honest, straight, line. ]

This theorem is completely unreasonable for three reasons. First of all, the fact that any cubic surface contains any (entire) lines at all is kind of stunning. Second, the fact that the number of lines it contains is finite is its own kind of cray. And finally, every single cubic surface has the SAME NUMBER of lines?? Yes! Always; for real!

All of these miracles have justifications, and most of them are kind of technical. Theo spent a considerable amount of time talking about the second one, but after scribbling on my notes for the better part of an hour, I can’t make heads or tails of them. So instead I’m going to talk about blowups.

I mentioned blowups in the fifth post of the sequence on Schubert varieties, and I dealt with it fairly informally there, but Theo suggested a more formal but still fairly intuitive way of understanding blowups at a point. The idea is that we are trying to replace the point with a collection of points, one for each unit tangent vector at the original point. In particular, a point on any smooth surface has a blowup that looks like a line, and hence the blowup in a neighborhood of the point looks like this:

image

(source)

Here is another amazing fact about cubic surfaces: all of them can be realized as a plane— just an ordinary, flat (complex) 2D plane— which has been blown up at exactly six points. These points have to be “sufficiently generic”; much like in the crescent configuration situation, you need that no three of the six points lie on a common line, and that the six points do not all lie on a conic curve (the zero set of a degree-2 polynomial).

In fact, using this description, it’s possible to very easily recover 21 of the 27 lines. Six of the lines come from the blowups themselves, since points blow up into lines. Another fifteen come from the lines between any two of the six blown-up points. This requires a little bit of work: you can see in the picture that the “horizontal directions” of the blowup are locally honest lines. Although most of these will become distorted near the other blowups, precisely one will not: the one corresponding to the tangent vector pointing directly at the other blowup point.

The remaining six lines can also be understood from this picture: they come from the images of the conics passing through five of the six blowup points (one conic for each omitted point). I have not seen a convincing elementary reason why this should be true; the standard proof is via a Chow ring computation. If you know anything about Chow rings, you know that I am not about to repeat that computation right here.
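The bookkeeping in the blowup description can be tallied in a couple of lines (a trivial sketch, just to make the 6 + 15 + 6 count explicit):

```python
from math import comb

exceptional = 6            # each of the six blown-up points becomes a line
pair_lines  = comb(6, 2)   # lines through two of the six points: 15
conic_lines = comb(6, 5)   # conics through five of the six points: 6
total = exceptional + pair_lines + conic_lines  # 27
```

Exactly Cayley's 27 lines, with no Chow ring in sight (for the counting, anyway).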

This description is nice because it not only tells us how many lines there are, but also roughly how the lines intersect each other. I say “roughly” because you do have to know a little more precisely what’s going on with those conics. In particular, it is possible for three lines on a cubic surface to intersect at a single point, but this does not always happen.

I’ll conclude in the same way that Theo did, with a rushed comment about the fact that “27 lines on a cubic” is one part of a collection of relations and conjectured relations that Arnold called the trinities. Some of these trinities are more… shall we say… substantiated than others… but in any case, the whole mess is Langlandsian in scope and unlikely even to be stated rigorously, much less settled, in our lifetimes. But it makes for interesting reading and good fodder for idle speculation :)

7 years ago

Regarding Fractals and Non-Integral Dimensionality

Alright, I know it’s past midnight (at least it is where I am), but let’s talk about fractal geometry.

Fractals

If you don’t know what fractals are, they’re essentially just any shape that gets rougher (or has more detail) as you zoom in, rather than getting smoother. Non-fractals include easy geometric shapes like squares, circles, and triangles, while fractals include more complex or natural shapes like the coast of Great Britain, Sierpinski’s Triangle, or a Koch Snowflake.

image

Fractals, in turn, can be broken down further. Some fractals are the product of an iterative process and repeat smaller versions of themselves throughout them. Others are more natural and just happen to be more jagged.

image

Fractals and Non-Integral Dimensionality

Now that we’ve gotten the actual explanation of what fractals are out of the way, let’s talk about their most interesting property: non-integral dimensionality. The idea that fractals do not actually have an integral dimension was originally thought up by this guy, Benoit Mandelbrot.

image

He studied fractals a lot, even finding one of his own: the Mandelbrot Set. The important thing about this guy is that he realized that fractals are interesting when it comes to defining their dimension. Most regular shapes can have their dimension found easily: lines with their finite length but no width or height; squares with their finite length and width but no height; and cubes with their finite length, width, and height. Take note that each dimension has its own measure. The deal with many fractals is that they can’t be measured very easily at all using these terms. Take Sierpinski’s triangle as an example.

image

Is this shape one- or two-dimensional? Many would say two-dimensional at first glance, but the same shape can be created using a line rather than a triangle.

image

So now it seems a bit more tricky. Is it one-dimensional since it can be made out of a line, or is it two-dimensional since it can be made out of a triangle? The answer is neither. The problem is that, if we were to treat it like a two-dimensional object, the measure of its dimension (area) would be zero. This is because we’ve technically taken away all of its area by taking out smaller and smaller triangles in every available space. On the other hand, if we were to treat it like a one-dimensional object, the measure of its dimension (length) would be infinity. This is because the line keeps getting longer and longer to stretch around each and every hole, of which there are an infinite number. So now we run into a problem: if it’s neither one- nor two-dimensional, then what is its dimensionality? To find out, we can use non-fractals.
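Both of those limits can be checked by hand. This sketch (my own formulas for the standard construction, starting from an equilateral triangle of side 1) tracks area and boundary length as the iteration deepens:

```python
import math

def sierpinski_measures(n, side=1.0):
    """Area and total boundary length after n subdivision steps."""
    area0 = (math.sqrt(3) / 4) * side**2   # area of the starting triangle
    area = area0 * (3/4)**n                # each step keeps 3 of the 4 sub-triangles
    length = 3 * side * (3/2)**n           # each step multiplies the boundary by 3/2
    return area, length
```

As n grows, the area term (3/4)^n marches to zero while the length term (3/2)^n blows up, which is exactly the "neither one- nor two-dimensional" paradox.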

Measuring Integral Dimensions and Applying to Fractals

Let’s start with a one-dimensional line. The measure for a one-dimensional object is length. If we were to scale the line down by one-half, what is the fraction of the new length compared to the original length?

image

The new length of each line is one-half the original length.

Now let’s try the same thing for squares. The measure for a two-dimensional object is area. If we were to scale down a square by one-half (that is to say, if we were to divide the square’s length in half and divide its width in half), what is the fraction of the new area compared to the original area?

image

The new area of each square is one-quarter the original area.

If we were to try the same with cubes, the volume of each new cube would be one-eighth the original volume of a cube. These fractions provide us with a pattern we can work with.

In one dimension, the new length (one-half) is equal to the scaling factor (one-half) put to the first power (given by it being one-dimensional).

In two dimensions, the new area (one-quarter) is equal to the scaling factor (one-half) put to the second power (given by it being two-dimensional).

In three dimensions, the same pattern follows suit, in which the new volume (one-eighth) is equivalent to the scaling factor (one-half) put to the third power.

We can infer from this trend that the dimension of an object could be (not is) defined as the exponent fixed to the scaling factor of an object that determines the new measure of the object. To put it in mathematical terms:

new measure = (scaling factor)^D

Examples of this equation would include the one-dimensional line, the two-dimensional square, and the three-dimensional cube:

½ = ½^1

¼ = ½^2

1/8 = ½^3

Now this equation can be used to define the dimensionality of a given fractal. Let’s try Sierpinski’s Triangle again.

image

Here we can see that the triangle as a whole is made from three smaller versions of itself, each of which is scaled down by half of the original (this is proven by each side of the smaller triangles being half the length of the side of the whole triangle). So now we can just plug in the numbers to our equation and leave the dimension slot blank.

1/3 = ½^D

To solve for D, we need to know what power ½ must be raised to in order to get 1/3. To do this, we can use logarithms (quick note: since (½)^D = 1/3 is equivalent to 2^D = 3, we can replace ½ with 2 and 1/3 with 3).

log_2(3) = roughly 1.585

So we can conclude that Sierpinski’s triangle is 1.585-dimensional. Now we can repeat this process with many other fractals. For example, this Sierpinski-esque square:

image

It’s made up of eight smaller versions of itself, each of which is scaled down by one-third. Plugging this into the equation, we get

1/8 = (1/3)^D

log_3(8) = roughly 1.893

So we can conclude that this square fractal is 1.893-dimensional.

We can do this on this cubic version of it, too:

image

This cube is made up of 20 smaller versions of itself, each of which is scaled down by 1/3.

1/20 = (1/3)^D

log_3(20) = roughly 2.727

So we can conclude that this fractal is 2.727-dimensional.
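All three calculations follow the same recipe, so they can be wrapped in one small function (a sketch of the self-similarity dimension formula derived above; the function name is mine):

```python
import math

def fractal_dimension(copies, scale):
    """Solve (1/copies) = scale**D for D: D = log(copies) / log(1/scale)."""
    return math.log(copies) / math.log(1 / scale)

d_triangle = fractal_dimension(3, 1/2)    # Sierpinski's triangle, ~1.585
d_square   = fractal_dimension(8, 1/3)    # the square fractal,    ~1.893
d_cube     = fractal_dimension(20, 1/3)   # the cubic fractal,     ~2.727
```

Plug in how many copies the shape splits into and how much each copy is scaled down, and out comes the (usually non-integral) dimension.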


5 years ago

Note to self: debugging

always remember to remove the Log.d("Well fuck you too") lines and the variables named "wtf" and "whhhyyyyyyy" before pushing anything to git

7 years ago

Geometry at work: Maxwell, Escher and Einstein

Maxwell’s diagram

from the 1861 Philosophical Magazine, showing the rotating vortices of electromagnetic forces, represented by hexagons, and the inactive spaces between them

The impossible cube

invented by Escher in 1958, as an inspiration for his Belvedere lithograph.

Geometry of space-time

The three dimensions of space and the one dimension of time together give shape to four-dimensional space-time.

