Honestly, e belongs in a calculus course, not in algebra 2. Presenting it prematurely makes it seem magical and incomprehensible. Why the rush to get e in before students are ready for it?
This constant may be chosen to be 1, which immediately simplifies everything marvelously, and feels like the obvious "best" value. Doing this corresponds to making e the base of the exponentiation operation. That is, it gives access to approximations of e via the values f(1).
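A sketch of one way to read "approximations of e via the values f(1)" (the thread doesn't spell out the construction, so this is an assumption): take f to be the Euler-method approximation to the solution of f'(x) = f(x) with f(0) = 1. With n steps on [0, 1], each step multiplies the running value by (1 + 1/n), so f(1) = (1 + 1/n)^n, which converges to e as n grows.

```python
def f_at_one(n):
    # Euler's method for f'(x) = f(x), f(0) = 1, on [0, 1] with n steps.
    # Each step multiplies by (1 + 1/n), so f(1) = (1 + 1/n)**n.
    value = 1.0
    for _ in range(n):
        value *= 1 + 1 / n
    return value

for n in (10, 1000, 100_000):
    print(n, f_at_one(n))
```

For n = 100,000 this already agrees with e = 2.71828... to four decimal places; the error shrinks roughly like e/(2n).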
-
Base-10 logarithms were the norm in the 17th century, but a table of the natural logarithms of the integers from 1 to 1000 had already been published in 1622 (Speidell, New Logarithmes); it agrees with the modern ln(x) to six decimal places (though the table omits the decimal point).
-
Leibniz appears to have been the first to actually write down e in decimal form, in 1691. He was occasioned to do so in connection with the catenary y = (e^x + e^-x)/2. Here is the source: https://books.google.nl/books?id=7iI1AAAAIAAJ&dq=%221.0000000%2C%202.7182818%22&pg=PA361#v=onepage&q=%221.0000000,%202.7182818%22&f=false