BrainDen.com - Brain Teasers

## Question

"We should just make base-10 computers," Bob commented to his friend Randy. "Rather than old base-2, which they only used because they had on/off vacuum tubes. We have the technology today to build faster computers... think of a base-10 computer! It would be five times faster than a base-2 computer - think about it. What could take up a bazillion digits in base 2 - or should I say bits, of 0 or 1 - would take up much less in base 10. Five times fewer digits, five times more efficient!"

"That's not necessarily true," mused Randy, scratching his chin. "We're defining Base Efficiency as this, where x is the number to be compared, y is the larger base and z is the smaller base: "

Bef(x) in y|z = [digits of x in base z] / [digits of x in base y]

For example, 5 in base 10 takes up 1 digit ("5") whereas, in base 2, it takes up three digits ("101"). Thus:

Bef(5) in 10|2 = 3/1 = 3

"That's not 5x the storage efficiency," Randy added. "That's only three. Look at 10, that's 4/2, or 2. One is only 1/1! No, I think the average overall efficiency of base 10 compared to base 2 is...."

What does Randy say next? Why? Also, what's the largest efficiency you can get in 10|2, and what number(s) give you that efficiency?

Use spoilers and show your work

## Recommended Posts


I haven't spent more than a few minutes thinking about this, but an early guess is that the average is slightly higher than 3, perhaps 3 1/3. This is based on the early powers of 2: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512. Those are the first 10 powers of two; the tenth, 512, is written in binary as 1000000000 (10 digits) yet is only a 3-digit number in base 10. Then the pattern seems to repeat, with 1024 (1), 2048 (2), 4096 (4), 8192 (8), 16384 (16), 32768 (32), etc. And 10/3 equals 3 1/3, which would be the average efficiency.

I believe the maximum is 4. That would be at 8, which in binary is 1000.

I don't think I did a very good job explaining myself, hopefully you guys could follow that.

##### Share on other sites

The higher the number to compare, the higher the efficiency.

By averaging the efficiency I assume you mean the average over all possible numbers; then the efficiency approaches infinity.

##### Share on other sites
> The higher the number to compare, the higher the efficiency.
>
> By averaging the efficiency I assume you mean the average over all possible numbers; then the efficiency approaches infinity.

I don't think efficiency approaches infinity. Because if you multiply a number by 2 four times (as in, multiply it by 16), then in base 10, it has gotten at least 1 digit larger, possibly two digits larger (ex. 9*16=144, which is two digits larger than the original number 9), and in base 2 it has gotten 4 digits larger. Therefore, efficiency is somewhere between 3 and 4.

##### Share on other sites

taliesin: incorrect

rossbeemer: you're close

It is somewhere between 3 and 4, you're right there.

An equation I came up with:

Bef(x) in y|z = (⌊log_z(x)⌋ + 1) / (⌊log_y(x)⌋ + 1)

where ⌊⌋ is the floor function and the +1 turns the floored log into a digit count: for example, ⌊log_2(5)⌋ + 1 = 2 + 1 = 3 digits. (A plain ceiling almost works, but it fails at exact powers of the base: 8 is "1000" in binary, four digits, yet ⌈log_2(8)⌉ = 3.)

That of course just explains y|z for one value (x), it doesn't calculate the average

Oh and you're right about the maximum, though you're missing one number: in 10|2, the values of 8 and 9 both give an efficiency of 4/1
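
In code form, the digit-count idea can be sketched like this (a hypothetical Python helper of my own, not from the thread; it counts digits with exact integer arithmetic rather than floating-point logs, which can misround near exact powers):

```python
def digits(x, base):
    """Digit count of positive integer x written in the given base."""
    d = 0
    while x > 0:
        x //= base
        d += 1
    return d

def bef(x, y=10, z=2):
    """Base Efficiency of x, comparing larger base y against smaller base z."""
    return digits(x, z) / digits(x, y)

print(bef(5))   # 3.0  ("101" vs "5")
print(bef(8))   # 4.0  ("1000" vs "8")
print(bef(10))  # 2.0  ("1010" vs "10")
```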

##### Share on other sites

I had thought about a log equation similar to that, but like you said, that only determines efficiency for one value. I'm guessing it ultimately leads to a pattern so you can calculate the average. I'll have to think about it for a little while.

Good puzzle!
##### Share on other sites

Back in my college days - ahem, that would be the '60s - Richard Halvorson, a professor friend of mine, gave a lecture on ternary logic.

The subject was relevant because of his [then] current work on tri-state [no, not NY, NJ, CT] logic devices with states = {-1, 0, 1}.

Aside from the drawback of the obvious name for a Ternary digIT, ternary logic offered an efficiency boost over binary just as you suggest here.

But to do decimal logic, you'd need circuits [for both CPU and RAM] that could handle 10, rather than 2, distinguishable states. As a practical matter, some of the efficiency gain [and logarithmic seems to offer the best estimate of that gain] would likely be chewed up in the added complexity of computing with more content-rich digits. It turns out that computational complexity is loosely governed by physical laws, and nothing comes completely for free. The best you can do is optimize, that is find the process that minimizes wasted effort, and binary doesn't have much waste.

Interesting question, tho, and I'd go with a logarithmic gain of efficiency.

##### Share on other sites

Without looking at bonanova's spoiler...

My guess: log base 2 of 10, which is 3.321928095, for the average efficiency.

##### Share on other sites
> Back in my college days - ahem, that would be the '60s - Richard Halvorson, a professor friend of mine, gave a lecture on ternary logic.
>
> The subject was relevant because of his [then] current work on tri-state [no, not NY, NJ, CT] logic devices with states = {-1, 0, 1}.
>
> Aside from the drawback of the obvious name for a Ternary digIT, ternary logic offered an efficiency boost over binary just as you suggest here.
>
> But to do decimal logic, you'd need circuits [for both CPU and RAM] that could handle 10, rather than 2, distinguishable states. As a practical matter, some of the efficiency gain [and logarithmic seems to offer the best estimate of that gain] would likely be chewed up in the added complexity of computing with more content-rich digits. It turns out that computational complexity is loosely governed by physical laws, and nothing comes completely for free. The best you can do is optimize, that is, find the process that minimizes wasted effort, and binary doesn't have much waste.
>
> Interesting question, tho, and I'd go with a logarithmic gain of efficiency.

Yes, the original riddle was going to be something like this: "It costs 4.5 times more money [just a BSed number] for a computer chip to work in base 10 than in base 2... since it can hold 5 times the data, you come out ahead." But of course that's wrong, because the MAXIMUM efficiency for 10|2 is 4 (at 8 and 9). So instead I decided to make the riddle purely mathematical.

##### Share on other sites

Now I've read bonanova's spoiler, and I was wondering about that. I figured that there would probably be too much efficiency lost with the computer needing to recognize 10 digits as opposed to just 2, which is probably why base-2 computers are still used today. Interesting thoughts about {-1,0,1} though.

##### Share on other sites
> Now I've read bonanova's spoiler, and I was wondering about that. I figured that there would probably be too much efficiency lost with the computer needing to recognize 10 digits as opposed to just 2, which is probably why base-2 computers are still used today. Interesting thoughts about {-1, 0, 1} though.

Yes, I used to think base 2 was only used because that's what has been used since the origin of computers. But base 2 is actually the most cost-efficient for storing numbers unless a base-10 digit can be made to cost about 3 times as much as a bit, or less. Which is why this problem has a lot of practical application.

edit: You saw my equation in post #5, right?

Edited by unreality
##### Share on other sites

Parallel processing is akin to computing with higher complexity digits.

And you see there that the increase in speed is accompanied by increased power consumption.

So gains of "efficiency" defined in terms of MFlops/Watt are hard to achieve.

Speed, however, yes.

##### Share on other sites

Yeah... base 2 is one of the most practical, if not the most practical, at least with the technology we have today. But this problem is focused entirely on the storage efficiency of digits in one base over another, regardless of whether you're using a computer or scratching symbols in the dirt. Actually, this problem is specifically geared to the overall average digit-storage efficiency of base 10 over base 2.

##### Share on other sites

I did see your equation in post 5, but I'm not sure how exactly to use it to calculate an average. You saw my revised guess, right?

Also, do you guys know anything about quantum computers, using the spin of an electron to store data? I don't know too much about them, but they sound pretty awesome.

##### Share on other sites

Yeah, the qubit can be 0, 1, or the superposition of 0 AND 1 ;D Not sure what the difference there is between that and ternary though, since isn't 0/1 kind of like a 2, or can you separate 0/1 into ones and zeroes or something?

Ross: this is as far as I know, 'cuz I made a program that uses my single-value equation from post 5 to calculate averages. I calculated 10|2 out to the 10,000th iteration and the average was around 3.17 or something like that (earlier it was 3.14 and I thought it might be pi, I got excited, but no), so unless it converges veeeery slowly, it's not going to be the 3.32-ish that log base 2 of 10 is. Hmmm... I will think more about this though.

##### Share on other sites

It's possible that it converges very slowly, but I somewhat doubt it. I was going to write a program myself, but you beat me to it. Do you have any ideas why it might be 3.17?

##### Share on other sites

It was actually 3.17623. The totaled efficiency for x from 1 to 10,000 was 31762.3, which then gets divided by 10,000. As for why, I have no idea. 100 iterations gave about 3.0-something, 1,000 gave 3.14 I think (that got me pi-excited, but alas it was not to be ;D), and 10,000 gave 3.17623 and took forever on my graphing calculator [yes, I made it on my graphing calculator lol].

Because of this slow convergence, and because there really is no definition of 'slow' when you're looking at infinity, I'm thinking you may be right about it being log base 2 of 10. After dinner I'll think more on it and possibly write a new program on my computer and calculate more iterations.
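
The calculator program itself isn't shown in the thread; a minimal brute-force reconstruction of the same computation in Python (names are my own) looks like this:

```python
def digits(x, base):
    """Digit count of positive integer x in the given base."""
    d = 0
    while x > 0:
        x //= base
        d += 1
    return d

def average_efficiency(n):
    """Mean of digits-in-base-2 / digits-in-base-10 over x = 1..n."""
    return sum(digits(x, 2) / digits(x, 10) for x in range(1, n + 1)) / n

print(average_efficiency(10_000))  # ≈ 3.176, close to the 3.17623 reported above
```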

##### Share on other sites

The number 9 gives maximum efficiency, i.e. 4/1 = 4.

##### Share on other sites
> The number 9 gives maximum efficiency, i.e. 4/1 = 4.

Both 8 and 9 give the highest efficiency of 4 when you're doing 10|2. Good job!

On a different note, can you PROVE that that's the absolute highest efficiency, without a doubt?
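
For what it's worth, here is a sketch of why 4 is the ceiling (my own argument, not from the thread): a number with k decimal digits is less than 10^k, so it has at most ⌊k·log2(10)⌋ + 1 ≤ k·log2(10) + 1 binary digits, giving an efficiency of at most log2(10) + 1/k. For k ≥ 2 that is below 3.83, so an efficiency of 4 can only come from a 1-digit number, and 8 and 9 are the only 1-digit numbers needing 4 bits. A brute-force check agrees:

```python
def digits(x, base):
    """Digit count of positive integer x in the given base."""
    d = 0
    while x > 0:
        x //= base
        d += 1
    return d

# Exhaustive check over 1..100000: where is digits-in-base-2 / digits-in-base-10 largest?
best = max(digits(x, 2) / digits(x, 10) for x in range(1, 100_001))
winners = [x for x in range(1, 100_001) if digits(x, 2) / digits(x, 10) == best]
print(best, winners)  # 4.0 [8, 9]
```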

##### Share on other sites
> Yeah... base-2 is one of the most practical, if not the most, at least with the technology we have today. This problem is focused entirely on the storage efficiency of digits in one base over another, regardless of whether you're using a computer or scratching symbols in the dirt. Actually this problem is specifically geared to the overall average digit-storage efficiency of base 10 over base 2.

All mathematics aside, to use a base-n memory system, wouldn't you need an n-state transistor? I'm not saying it couldn't be done, but I'm guessing an optimized n-state transistor would be similar in size, speed, and power consumption to log2(n) MOSFETs.

There is also the computational efficiency to consider. The rules of arithmetic get comparatively ugly for bases larger than 2. In binary, it pretty much boils down to boolean logic, which is pretty hard to beat for speed and efficiency.

##### Share on other sites

Yep, exactly. I think we've pretty much established that base 2 is the best, practically speaking. But mathematically, base y has a higher digit efficiency than base z provided y > z, and that's what this problem is delving into.

##### Share on other sites

So have we reached a conclusion on this yet? If not, I will do my best to write an Excel or C++ program (I don't think my TI-83 could handle it) to calculate efficiency up to, say, 10 million, and try to determine where the average converges.

##### Share on other sites

Sounds good. Let me know how it turns out.

##### Share on other sites

I don't quite see how increasing the number of states wins anything in speed. Computers don't do their processing one bit at a time; on each clock pulse the state changes. If you had a simple 1000-state machine, you'd need a 3-digit register to represent that state in decimal and a 10-bit register to represent it in binary. So you'd use less space, but gain no speed. And the engineering of a 10-state element would be a technological nightmare.

A parallel processor is not an exact analogy here. Each processor has its own clock.

Anything you can do in decimal, you can do in the same amount of time in binary. Takes more space, but the design is much simpler on all levels.

##### Share on other sites

I agree, Prime. We know that base 2 is the best; we haven't said differently. And we're talking about purely mathematical digit storage here, not speed or anything else ;D

##### Share on other sites

I wrote a program to calculate the ratio. Up through 80,000,000 I got

3.2212085607383023

This is about as high as my program can go without sacrificing accuracy. But even at these numbers the ratio doesn't seem to be converging very quickly. For comparison, the ratio at 40,000,000 is

3.1496475646943716
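
Slow convergence is expected here: each term's digit ratio approaches log2(10) ≈ 3.3219 only at a 1/log(x) rate, so the running average crawls upward. The sum can be computed in O(log n) time instead of O(n) by stepping between consecutive powers of 2 and 10, where both digit counts are constant. A sketch of mine (not the poster's program):

```python
import math

def average_efficiency(n):
    """Mean of (binary digit count / decimal digit count) over x = 1..n,
    summed blockwise between consecutive powers of 2 and 10."""
    total = 0.0
    x = 1
    while x <= n:
        d2 = x.bit_length()       # binary digits of x
        d10 = len(str(x))         # decimal digits of x
        # next x at which either digit count increases
        nxt = min(1 << d2, 10 ** d10, n + 1)
        total += (nxt - x) * d2 / d10
        x = nxt
    return total / n

print(average_efficiency(10**4))   # ≈ 3.176, matching the earlier estimate
print(average_efficiency(10**16))  # far past 80,000,000, still below the limit
print(math.log2(10))               # the limiting value, ≈ 3.3219
```

Pushing n far beyond 80,000,000 this way shows the average still inching toward log2(10), consistent with the guess earlier in the thread.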
