
How can I use polynomials instead of bits to improve performance?

I have a 128-bit string, and my supervisor has asked me to represent those 128 bits as a polynomial. This is a scan of the paper he was writing on:

[Scanned note: converting the bits into a polynomial]

His idea is that, since we are eliminating the 0s from these bits, we will be able to perform the subsequent operations (most of which are XORs between the bits/polynomials) much faster than if we worked on all of the bits.

I understand what the requirement is, and I can do it on paper and in the application. But my way won't achieve his goal, which is improving performance. He said that there are already libraries that do this, but unfortunately I couldn't find any. The only thing I found was a Polynomial class that evaluates polynomials, which is not what I want.

So do you guys know how I can implement this to improve performance? Any code/snippets/articles would be very much appreciated.

The application is written in Java, if that makes any difference.

Thanks,

Mota

Update:

My supervisor says that this C library will do the task. I couldn't figure out how it works, though, or how it would accomplish this.

Is your supervisor familiar with BitSet? 128 bits is 16 bytes, which could be stored as two longs. However, using a BitSet, you won't need to worry about dealing with the combination of two longs. BitSet also provides methods for all the common bit operations. I think you'd be pretty hard-pressed to find a better solution than this.
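For reference, here is a minimal sketch of the BitSet approach; everything used here (`set`, `xor`, the `{…}` string form) is from the standard `java.util.BitSet` API:

```java
import java.util.BitSet;

public class BitSetXorDemo {
    public static void main(String[] args) {
        // A 128-bit value with bits 0, 64 and 127 set
        BitSet a = new BitSet(128);
        a.set(0);
        a.set(64);
        a.set(127);

        BitSet b = new BitSet(128);
        b.set(0);
        b.set(100);

        // In-place XOR: bit 0 appears in both sets and cancels out
        a.xor(b);
        System.out.println(a); // prints {64, 100, 127}
    }
}
```

BitSet performs the XOR word-by-word on its underlying long[] array, so for a fixed 128-bit value the whole operation is already just a couple of machine-word XORs.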

The polynomial approach is a pretty cool idea, but I think it's more theoretical than practical.

What he's proposing is a Monomial that you can compose into a Polynomial (think Composite pattern). Define all the usual math operations for both (addition, subtraction, multiplication, division) and any others you think you might need (e.g. differentiation and integration).

Polynomial will really shine for cases like x^1000 + 1, because you can capture it with just two terms.
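To make the sparse idea concrete, here is a hedged sketch of a GF(2) polynomial that stores only the exponents of its nonzero terms, so x^1000 + 1 really is just two entries; the class and method names are my own invention, not from any existing library. Over GF(2), XOR of two polynomials is the symmetric difference of their exponent sets:

```java
import java.util.TreeSet;

public class SparsePoly {
    // Exponents whose coefficient is 1; x^1000 + 1 is just {0, 1000}
    private final TreeSet<Integer> terms = new TreeSet<>();

    public void addTerm(int exp) {
        // Over GF(2), adding the same term twice cancels it out
        if (!terms.remove(exp)) {
            terms.add(exp);
        }
    }

    // XOR = symmetric difference of the two exponent sets
    public SparsePoly xor(SparsePoly other) {
        SparsePoly result = new SparsePoly();
        result.terms.addAll(this.terms);
        for (int exp : other.terms) {
            if (!result.terms.remove(exp)) {
                result.terms.add(exp);
            }
        }
        return result;
    }

    @Override
    public String toString() {
        return terms.toString();
    }

    public static void main(String[] args) {
        SparsePoly a = new SparsePoly();
        a.addTerm(1000);
        a.addTerm(0);          // a = x^1000 + 1

        SparsePoly b = new SparsePoly();
        b.addTerm(0);
        b.addTerm(3);          // b = x^3 + 1

        System.out.println(a.xor(b)); // prints [3, 1000]
    }
}
```

Note the downside the later answers point out: the set bookkeeping (allocation, boxing, tree rebalancing) costs far more per bit than a raw XOR, so this only pays off for very sparse, very high-degree polynomials, not for dense 128-bit values.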

The real question is: what does your boss imagine you're saving? A few bits? Speed? Development time? I agree with ziesemer - you'll be hard-pressed to do much better than a BitSet. The savings s/he's thinking of sounds like a micro-optimization to me, the kind of thing that wouldn't show up if you profiled your application.

But s/he is the boss...is it worth the argument?

Maybe you can abstract this detail away and profile the two.

If you have multiple strings that need XORing, this will actually add performance overhead, because you need to match up the weights (exponents) of the terms before you can combine them.

It all depends on what you are XORing it with, and how many times you have to do it, whether this approach yields any advantage.

I would go with ziesemer's solution.

I doubt very much you are going to get any benefit on 128-bit strings. 128 bits are 16 bytes; any object will take at least 40 bytes, so (almost) any supporting structure will be more expensive than the data it stores.
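To put that overhead in perspective, the dense representation is just two longs, and XOR on it is essentially free; this `UInt128` class is a hypothetical sketch, not a library type:

```java
public class UInt128 {
    final long hi;
    final long lo;

    UInt128(long hi, long lo) {
        this.hi = hi;
        this.lo = lo;
    }

    // XOR of the full 128 bits: two machine-word XORs, no per-term bookkeeping
    UInt128 xor(UInt128 other) {
        return new UInt128(hi ^ other.hi, lo ^ other.lo);
    }

    public static void main(String[] args) {
        UInt128 a = new UInt128(0xFFL, 0x0FL);
        UInt128 b = new UInt128(0x0FL, 0xFFL);
        UInt128 c = a.xor(b);
        System.out.printf("%x %x%n", c.hi, c.lo); // prints f0 f0
    }
}
```

Any sparse structure has to beat these two instructions (plus one allocation) to be worthwhile, which at 128 bits it almost certainly won't.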

Still, here's a good survey of existing methods, plus one novel one, which may (or may not) be useful to you. It's not a direct answer to your question, and I agree that ziesemer's solution is the best for numbers of this size, but if you want to broaden your horizons, take a look.
