Mathematicians will in one breath tell you they aren’t fractions, then in the next tell you dz/dx = dz/dy * dy/dx
Have you seen a mathematician claim that? Because there's an entire algebra they created just so it becomes a fraction.
Brah, chain rule & function composition.
Also multiplying by dx in diffeqs
vietnam flashbacks meme
This is until you do multivariate functions. Then for f(x(t), y(t)) you get this: df/dt = ∂f/∂x * dx/dt + ∂f/∂y * dy/dt
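As a quick sanity check of that formula (my own example, not from the comment above): take f(x, y) = x·y with x(t) = t and y(t) = t².

    \frac{df}{dt}
      = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}
      = y \cdot 1 + x \cdot 2t
      = t^2 + 2t^2 = 3t^2

which is exactly what you get by differentiating f(t) = t³ directly.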
(d/dx)(x) = 1 = dx/dx
Not very good mathematicians if they tell you they aren’t fractions.
The thing is that it’s legit a fraction and d/dx actually explains what’s going on under the hood. People interact with it as an operator because it’s mostly looking up common derivatives and using the properties.
Take for example
∫f(x) dx
to mean "the sum (∫) of supersmall sections of x (dx) multiplied by the value of x at that point ( f(x) ). This is why there’s dx at the end of all integrals.The same way you can say that the slope at x is tiny f(x) divided by tiny x or
d*f(x) / dx
or, more traditionally, (d/dx) * f(x).
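If it helps, here's that "sum of supersmall sections" picture as actual code, a minimal sketch of my own (the function name and step count are just for illustration):

    # "Sum of f(x) times a supersmall dx" -- a plain left Riemann sum.
    def riemann_sum(f, a, b, n=100_000):
        dx = (b - a) / n                      # the supersmall section of x
        return sum(f(a + i * dx) * dx for i in range(n))

    # Example: the integral of x^2 from 0 to 1 is 1/3.
    print(riemann_sum(lambda x: x * x, 0.0, 1.0))   # ~0.33333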
The other thing is that it's legit not a fraction.
it’s legit a fraction, just the numerator and denominator aren’t numbers.
No 👍
try this on – Yes 👎
It’s a fraction of two infinitesimals. Infinitesimals aren’t numbers, however, they have their own algebra and can be manipulated algebraically. It so happens that a fraction of two infinitesimals behaves as a derivative.
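For what it's worth, there's a concrete toy version of that algebra: dual numbers, where you adjoin a symbol eps with eps² = 0 (this is what forward-mode automatic differentiation uses). A minimal sketch of mine, class name and all made up for illustration:

    # Dual numbers a + b*eps with eps**2 == 0: the eps coefficient carries
    # the derivative along through ordinary arithmetic.
    class Dual:
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b             # a + b*eps

        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    x = Dual(3.0, 1.0)        # "3 plus an infinitesimal"
    y = x * x * x             # f(x) = x^3
    print(y.a, y.b)           # 27.0 27.0 -> f(3) and f'(3) = 3*3^2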
Ok, but no. Infinitesimal-based foundations for calculus aren’t standard and if you try to make this work with differential forms you’ll get a convoluted mess that is far less elegant than the actual definitions. It’s just not founded on actual math. It’s hard for me to argue this with you because it comes down to simply not knowing the definition of a basic concept or having the necessary context to understand why that definition is used instead of others…
Why would you assume I don’t have the context? I have a degree in math. I could be wrong about this, I’m open-minded. By all means, please explain how infinitesimals don’t have a consistent algebra.
-
I also have a masters in math and completed all coursework for a PhD. Infinitesimals never came up because they’re not part of standard foundations for analysis. I’d be shocked if they were addressed in any formal capacity in your curriculum, because why would they be? It can be useful to think in terms of infinitesimals for intuition but you should know the difference between intuition and formalism.
-
I didn’t say “infinitesimals don’t have a consistent algebra.” I’m familiar with NSA and other systems admitting infinitesimal-like objects. I said they’re not standard. They aren’t.
-
If you want to use differential forms to define 1D calculus, rather than a NSA/infinitesimal approach, you’ll eventually realize some of your definitions are circular, since differential forms themselves are defined with an implicit understanding of basic calculus. You can get around this circular dependence but only by introducing new definitions that are ultimately less elegant than the standard limit-based ones.
-
clearly, d/dx simplifies to 1/x
I found math in physics to have this really fun duality of “these are rigorous rules that must be followed” and “if we make a set of edge case assumptions, we can fit the square peg in the round hole”
Also I will always treat the derivative operator as a fraction
2+2 = 5
…for sufficiently large values of 2
i was in a math class once where a physics major treated a particular variable as one because at cosmic scale the value of the variable basically doesn't matter. the math professor both was and wasn't amused
Engineer. 2+2=5+/-1
I mean as an engineer, this should actually be 2+2=4 +/-1.
Statistician: 1+1=sqrt(2)
Computer science: 2+2=4 (for integers at least; try this with floating point numbers at your own peril, you absolute fool)
0.1 + 0.2 = 0.30000000000000004
Freshman engineer: wow floating point numbers are great.
Senior engineer: actually the distribution of floating point errors is a mindfuck.
Professional engineer: the mean error for all pairwise 64 bit floating point operations is smaller than the Planck constant.
comparing floats for exact equality should be illegal, IMO
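For the record, the usual workaround (standard library, nothing exotic): compare with a tolerance instead of ==.

    import math

    a = 0.1 + 0.2
    print(a == 0.3)              # False: binary rounding error
    print(math.isclose(a, 0.3))  # True: equal within the default relative tolerance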
pi*pi = g
units don’t match, though
Found the engineer
I always chafed at that.
“Here are these rigid rules you must use and follow.”
“How did we get these rules?”
“By ignoring others.”
is this how Brian Greene was born?
If not fraction, why fraction shaped?
Derivatives started making more sense to me after I started learning their practical applications in physics class.
d/dx was too abstract when learning it in precalc, but once physics introduced d/dt (change with respect to time t), it made derivative formulas feel more intuitive, like "velocity is the change in position with respect to time, which is the derivative of position" and "acceleration is the change in velocity with respect to time, which is the derivative of velocity".
Possibly you just had to hear it more than once.
I learned it the other way around since my physics teacher was speedrunning the math sections to get to the fun physics stuff and I really got it after hearing it the second time in math class.
But yeah: it often helps to have practical examples and it doesn’t get any more applicable to real life than d/dt.
I always needed practical examples, which is why it was helpful to learn physics alongside calculus my senior year in high school. Knowing where the physics equations came from was easier than just blindly memorizing the formulas.
The specific example of things clicking for me was understanding where the “1/2” came from in distance = 1/2 (acceleration)(time)^2 (the simpler case of initial velocity being 0).
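Where that 1/2 comes from, for the record: integrate a constant acceleration twice (with zero initial velocity and position):

    v(t) = \int_0^t a \, d\tau = a t, \qquad
    x(t) = \int_0^t v(\tau) \, d\tau = \int_0^t a\tau \, d\tau = \tfrac{1}{2} a t^2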
And then later on, complex numbers didn’t make any sense to me until phase angles in AC circuits showed me a practical application, and vector calculus didn’t make sense to me until I had to actually work out practical applications of Maxwell’s equations.
yea, essentially, to me, calculus is like the study of slope, and the slope of the slope, and so on: displacement, velocity, acceleration.
Except you can kinda treat it as a fraction when dealing with differential equations
Oh god this comment just gave me ptsd
And discrete math.
Only for separable equations
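The textbook separable example, where the "treat it like a fraction" step is really just shorthand for integrating both sides:

    \frac{dy}{dx} = k y
    \;\Rightarrow\; \frac{dy}{y} = k \, dx
    \;\Rightarrow\; \int \frac{dy}{y} = \int k \, dx
    \;\Rightarrow\; \ln|y| = k x + C
    \;\Rightarrow\; y = A e^{k x}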
1/2 <-- not a number. Two numbers and an operator. But also a number.
In Comp-Sci, operators mean stuff like >, *, /, + and so on. But in math, an operator is a (possibly symbolic) function, such as a derivative or matrix.
You're not wrong, technically, but even in mathematics "/" is considered an operator.
https://en.m.wikipedia.org/wiki/Operation_(mathematics)
oh huh, neat. Always thought of those as "operations."
It was a fraction in Leibniz’s original notation.
And it denotes an operation that gives you that fraction in operational algebra…
Instead of making it clear that d is an operator, not a value, and thus the entire thing becomes an operator, physicists keep claiming that there's no fraction involved. I guess they like confusing people.
The world has finite precision. dx isn't a limit towards zero, it's a limit towards the smallest numerical non-zero. For physics that's the Planck scale, for engineers it's the least significant bit/figure. All of calculus can be generalized to arbitrary precision, and that's called discrete math. So not even mathematicians agree on this topic.
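In code that "smallest step you care about" picture is literally a finite difference. A sketch of mine (function name and step size made up), not a claim about foundations:

    # Derivative as an honest fraction of two small-but-finite numbers.
    def ddx(f, x, h=1e-6):
        return (f(x + h) - f(x)) / h

    print(ddx(lambda x: x**2, 3.0))   # ~6.000001, exact answer is 6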
Why does using it as a fraction work just fine then? Checkmate, Maths!
It doesn't. It only sometimes does, because d/dx can be seen as an operator involving a limit of a fraction, and when the expression is sufficiently regular you can commute that limit with the surrounding operations.
Edit: added a clarifying sentence. I speak from a physicist's point of view.
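One standard example of the cancellation going wrong (with partial derivatives, so a slightly different setting, and not from this thread): for three variables tied together by a constraint F(x, y, z) = 0, naive cancellation suggests the product below is 1, but under the usual non-degeneracy assumptions it's actually

    \left(\frac{\partial x}{\partial y}\right)_z
    \left(\frac{\partial y}{\partial z}\right)_x
    \left(\frac{\partial z}{\partial x}\right)_y = -1

(the triple product / cyclic rule, e.g. for the ideal gas law).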
This very nice Romanian lady who taught me complex-plane calculus made sure to emphasize that e^(j*theta) was just a notation.
Then proceeded to just use it as if it were actually Euler's number raised to j*theta. And I still don't understand why, and in what cases I can't just assume it's the actual thing.
Let’s face it: Calculus notation is a mess. We have three different ways to notate a derivative, and they all suck.
Calculus was the only class I failed in college. It was one of those massive 200-student classes. The teacher had a thick accent and handwriting that was difficult to read. Also, I remember her using phrases like "iff" that at the time I thought were her misspelling something, only to later realize it was shorthand for "if and only if", so I can't imagine how many other things just blew over my head.
I retook it in a much smaller class and had a much better time.
e^(iθ) is not just notation. You can graph the entire function e^(x+iθ) across the whole complex domain and find that it matches up smoothly with both the version restricted to the real axis (e^x) and the imaginary axis (e^(iθ)). The complete version is:
e^(x+iθ) := e^x (cos(θ) + i·sin(θ))
Various proofs of this can be found on Wikipedia. Since these proofs just use basic calculus, this means we didn't need to invent any new notation along the way.
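Quick numerical sanity check of that identity with the standard library (my own snippet, values picked arbitrarily):

    import cmath, math

    x, theta = 0.7, 2.3
    lhs = cmath.exp(complex(x, theta))
    rhs = math.exp(x) * complex(math.cos(theta), math.sin(theta))
    print(abs(lhs - rhs) < 1e-12)    # True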
I’m aware of that identity. There’s a good chance I misunderstood what she said about it being just a notation.
It’s not simply notation, since you can prove the identity from base principles. An alien species would be able to discover this independently.
I’ve seen e^{d/dx}
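In case that one looks like pure abuse of notation: at least formally (and honestly, for analytic f where the series converges), expanding the exponential as a power series in d/dx gives the shift operator:

    e^{a \frac{d}{dx}} f(x)
      = \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n f}{dx^n}(x)
      = f(x + a)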
It legitimately IS exponentiation. Romanian lady was wrong.
It is just a definition, but it's the only definition of the complex exponential function that is well behaved and agrees with the real-variable function on the real line.
Also, every identity about analytic functions on the real line also holds for the corresponding complex function (excluding things that require ordering). They probably should have explained that.
She did. She spent a whole class on the fundamental theorem of algebra, I believe? I was distracted though.
It’s not even a fraction, you can just cancel out the two "d"s
"d"s nuts lmao
Look, it is so simple, it just acts on an uncountably infinite-dimensional vector space of differentiable functions.
fun fact: the vector space of differentiable functions (at least on compact domains) is actually of countable dimension.
still infinite though
Doesn't the Baire category theorem (BCT) imply that infinite-dimensional Banach spaces cannot have a countable basis?
Uhm, yeah, but there are two different definitions of basis iirc. And I'm using the analytical definition here; you're talking about the linear algebra definition.
So I'd call an infinite-dimensional vector space of countable/uncountable dimension if it has a countable/uncountable basis. What is the analytical definition? Or do you mean basis in the sense of topology?
Uhm, I remember there being two definitions of basis.
The basis in linear algebra says that you can write every vector v as a finite sum v = sum over i from 1 to N of a_i * v_i, where the a_i are arbitrary coefficients.
The basis in analysis says that you can write every vector v as an infinite sum v = sum over i from 1 to infinity of a_i * v_i, i.e. a convergent series. It requires that a topology is defined on the vector space first, so convergence becomes well-defined. We call such a vector space countably infinite-dimensional if there is a basis (v_1, v_2, …) such that every vector v can be represented as a convergent series.
Ah that makes sense, regular definition of basis is not much of use in infinite dimension anyways as far as I recall. Wonder if differentiability is required for what you said since polynomials on compact domains (probably required for uniform convergence or sth) would also work for cont functions I think.
regular definition of basis is not much of use in infinite dimension anyways as far as I recall.
yeah, that’s exactly why we have an alternative definition for that :D
Wonder if differentiability is required for what you said since polynomials on compact domains (probably required for uniform convergence or sth) would also work for cont functions I think.
Differentiability is not required; what is required is a topology, i.e. a definition of convergence to make sure the infinite series are well-defined.
I just checked and there are official names for these:
- the term Hamel basis refers to the basis in the linear algebra sense
- the term Schauder basis refers to the basis in the analysis sense.
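A concrete example of the difference, in case it helps (mine, not from the thread): the trigonometric system is a Schauder basis of L²[0, 2π], with countably many elements and convergence in the L² norm,

    f = \sum_{n=-\infty}^{\infty} c_n e^{inx}, \qquad
    c_n = \frac{1}{2\pi} \int_0^{2\pi} f(x)\, e^{-inx} \, dx

whereas a Hamel basis of the same space is necessarily uncountable (that's the Baire category argument mentioned above).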
Having studied physics myself, I'm sure physicists know what a derivative looks like.