2011-08-09, 21:03  #89  
"Lucan"
Dec 2006
England
1100101001010_{2} Posts 
Quote:
Who am I to suggest the precise schedule two prolific TFers should adopt? Once the TF (72-bit) wavefront is comfortably ahead of the LLs being dished out, it won't matter one iota.

BTW, the most recent query ("Will Prime95 keep going till the assignment is finished?") has given me a thought: how many folk out there think that an incomplete LL test is better than nothing? Of course, saving the residue and iteration number is a suggestion that has been raised many times before. What is the answer? Could this partly explain why 80% of LLs are expired/returned?

David
Last fiddled with by davieddy on 2011-08-09 at 21:08 

2011-08-09, 21:25  #90 
If I May
"Chris Halsall"
Sep 2002
Barbados
7·1,423 Posts 
Hey, nuclear fusion has been working only 8.3 light-minutes away for 5 billion years or so.

We humans are now able to produce temperatures very close to absolute zero, and also much higher than the core of our (or any other) sun. It is only a matter (no joke intended) of time before we are finally able to sustain a controlled fusion reaction from which we can derive energy.

(As we all know (or should know), fusion reactions have already "successfully" been used here on Earth on purpose. But, sadly, they were not used for peaceful purposes....)

Last fiddled with by chalsall on 2011-08-09 at 21:30 Reason: Pedantic clarification. 
2011-08-09, 22:46  #91  
"Lucan"
Dec 2006
England
2·3·13·83 Posts 
Quote:
I observed that the frequency of P-1 finds dropped off approximately exponentially with bit length. I expressed this quantitatively, and requested a (preferably simple) theoretical explanation.

David
Last fiddled with by davieddy on 2011-08-09 at 22:47 

2011-08-09, 23:26  #92  
Dec 2010
Monticello
5·359 Posts 
Quote:
Once the automatic connection to the server is working in mfaktc, this will become a far less critical question, and the server will be able to do what it does now, which is to hand out assignments in more "critical" areas to those users and machines that have completed them and turned them in quickly in the past.

I hadn't thought about turnaround, but I recently had a 10-day period where I turned in no work (travelled to San Jose), and probably had 15 days of work queued up. Having two weeks of work to do and getting it done over 2-3 weeks is quite normal for the machine I have borrowed at the local university; I don't visit it every day and can't visit it now until I get out from under a small flood at home.

E. If we are getting an 80% incomplete rate on 53M LL assignments, then I do suggest (without offering the requisite resources, of course!!) that the server be contacted after each 10% of the assignment with enough information for a different machine to complete the assignment. Either that, or start new Prime95 clients out with something smaller that actually completes in a few days, like P-1.

Last fiddled with by Christenson on 2011-08-09 at 23:29 Reason: idea for half-chewed assignments 

2011-08-10, 04:22  #93 
Jul 2003
Thuringia; Germany
2×29 Posts 
first guess
Hi!
My English isn't very good, so let us concentrate on the math.

The main problem is determining the probability that P-1 finds a factor, under the condition that trial factoring didn't. For a first guess, let us assume there is no second stage in P-1. Then we will find a factor P of 2^p-1 iff the k in P = 2kp+1 is B1-smooth. (Actually the k has to be B1-powersmooth, but let us ignore this little difference.)

The probability that a random k is B1-smooth can be evaluated by Dickman's function delta(ln k / ln B1), because k^(1/u) with u = ln k / ln B1 evaluates to k^(ln B1 / ln k) = B1, and delta(u) gives the probability of k being B1-smooth as k goes to infinity.

The probability that such a P divides 2^p-1 is 1/P, or nearly 1/(2kp). The probability that this P is prime is (by the Prime Number Theorem) about 1/ln(P), or 1/(ln 2 + ln p + ln k). Combining these probabilities under the assumption of independence gives us the probability that a given P is a prime factor of 2^p-1 and is found by P-1 factoring with bound B1 (= B2):

P(k) = delta(ln k / ln B1) / ( (ln 2 + ln p + ln k) * 2kp ).

This we have to sum up from the least k with P = 2kp+1 > B — the bound of trial factoring — to infinity.

(There are several simplifications in this. First, for large P = 2kp+1 the probability of dividing 2^p-1 is not 1/P but 0 if P > 2^p-1; but the terms converge very fast to 0, so the error from summing up to infinity is low. Second, more than one factor could be found, so we would have to subtract those probabilities; but they are also very low, so for simplicity we ignore these cases.)

So the probability that a factor P of 2^p-1 is found by P-1 factoring with bounds B1 = B2, under the condition that trial factoring to bound B failed, is approximately

sum over k > B/(2p) of delta(ln k / ln B1) / ( (ln 2 + ln p + ln k) * 2kp ).

For large k the sum ln 2 + ln p + ln k is dominated by ln k, and so the corresponding integral simplifies to

(1/(2p)) * integral from B/(2p) to infinity of delta(ln k / ln B1) / (k * ln k) dk.

This can be computed numerically. Am I right this far? Or are the assumptions for simplicity too hard?

Cyrix
Last fiddled with by cyrix on 2011-08-10 at 04:26 
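Cyrix's "this can be computed numerically" can be sketched as follows. This is my own illustration, not code from the thread: it tabulates Dickman's function from the identity u·rho(u) = ∫ rho(t) dt over [u−1, u], then approximates cyrix's sum over k by an integral in t = ln k (keeping the full ln 2 + ln p + ln k term rather than the large-k simplification). The grid step, the u = 20 cutoff, the trapezoid step count, and the example bounds below are all arbitrary assumptions.

```python
import math

def dickman_table(h=1e-3, u_max=20.0):
    """Tabulate Dickman's rho on a grid of step h, using the identity
    u * rho(u) = integral_{u-1}^{u} rho(t) dt, with rho(u) = 1 for u <= 1."""
    m = int(round(1.0 / h))          # grid points per unit of u
    n = int(round(u_max / h)) + 1
    rho = [1.0] * n
    window = float(m)                # sum of rho[j] for j in [i-m, i-1]
    for i in range(m + 1, n):
        u = i * h
        # trapezoid over [u-1, u]; solve for the (implicit) upper endpoint rho[i]
        rho[i] = h * (window - rho[i - m] / 2.0) / (u - h / 2.0)
        window += rho[i] - rho[i - m]   # slide the unit-length window
    return rho, h

def p1_find_probability(p, tf_bound_bits, B1, rho_tab, h):
    """Approximate cyrix's sum of
        delta(ln k / ln B1) / ((ln 2 + ln p + ln k) * 2kp)
    over k > 2^tf_bound_bits / (2p), via the substitution t = ln k."""
    lnB1 = math.log(B1)
    t0 = tf_bound_bits * math.log(2) - math.log(2 * p)   # ln of smallest untested k
    t1 = (len(rho_tab) - 1) * h * lnB1                   # rho table runs out (u = u_max)
    steps = 4000
    dt = (t1 - t0) / steps
    total = 0.0
    for i in range(steps + 1):
        t = t0 + i * dt
        u = t / lnB1
        rho = rho_tab[min(int(u / h), len(rho_tab) - 1)]
        w = 0.5 if i in (0, steps) else 1.0              # trapezoid weights
        total += w * rho / (math.log(2) + math.log(p) + t)
    return total * dt / (2 * p)
```

A quick sanity check on the tabulation is rho(2) = 1 − ln 2; the probability itself should shrink as the trial-factoring bound grows, since TF removes the smallest (most likely B1-smooth) candidates first.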
2011-08-10, 09:26  #94  
"Lucan"
Dec 2006
England
2×3×13×83 Posts 
Quote:
I also hope Bob will give you full marks for effort, and hopefully for proficiency too! Now, does your result explain (simply!) the exponential drop in factors found by P-1 with bit level ("half-life" ~8 bits)?

David 

2011-08-10, 09:54  #95  
"Lucan"
Dec 2006
England
2×3×13×83 Posts 
Quote:
Quote:
Only 99,999,998 digits to go, and QC+Shor will be factoring all the Mersenne numbers between 332M and 500M bits like s*** off a shovel. If it finds one it can't factor, it might be prime!

BTW I think my "fusion" analogy might be closer than I had originally intended: difficulty with "containment"?

David
Last fiddled with by davieddy on 2011-08-10 at 10:05 

2011-08-10, 10:21  #96 
"Lucan"
Dec 2006
England
2×3×13×83 Posts 
Lies, Damned Lies and Statistics:
http://www.mersenne.info/trial_facto...ta/2/50000000/
http://mersenne.org/primenet/

We thought 72 bits of TF was a "conservative" target. How are "we" doing? Marks out of ten please.

David 
2011-08-14, 19:12  #97 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts 
Mr. (Dr. ?) Silverman, I just want to point out that this thread has long been removed from the mathematics forum, so if you don't care about the actual bit depths, please leave this thread. If not, then please ignore that.
The current answer is that we are going to bit levels deeper than what's listed at /various/math.php, although I recently realized that list doesn't go beyond exponents of 100M. My next question then became: what methods did George use to determine the bit depths? He hasn't been here since I asked that, but if anyone else knows, please post. 
2011-08-14, 19:55  #98 
Dec 2007
Cleves, Germany
2×5×53 Posts 
There's not too much math involved in determining the bit depths at present, if any. Even at the tabulated depths, the TF wavefront got too far ahead of the LL wavefront, and George decided to TF one bit deeper for exponents between 53M and 70M, or some such. That process has been repeated at least once so far, and I expect it to be repeated time and again as long as enough people do TF.

2011-08-14, 23:53  #99 
Dec 2010
Monticello
5·359 Posts 
George's original motivation was noting that the TF effort for a given, fixed bit level is proportional to 1/log N. He then solved a minimisation problem: given, say, 10,000 prime Mersenne exponents, how do I eliminate them with the least effort, ON AVERAGE?
We all know the answer: the optimum is reached when the marginal effort per exponent eliminated is the same for each type of work, so it doesn't matter which kind the next unit of effort goes into. On some single CPU, probably his prototypical 1 GHz-day-per-day CPU, he found that would happen at the bit depths he tabulated.

Along came GPUs and mfaktc, which TFs 10-100 times faster than CPUs, and the GPUs have all been crowding into the easy, fast work of TF (as opposed to CUDALucas), and you have the present situation. I expect this process to stop after 3-6 more bit levels, particularly as CUDALucas matures... and the TF work becomes as hard, relatively, as it was before, since each bit level of TF doubles the amount of compute work involved.

At the moment, I have 1 medium-power GT440 card doing TF (the other is doing Operation Billion Digits exclusively ATM), and it is producing about as many factors as half a dozen machines of various speeds doing P-1. The haul on the 666 exponents ckdo handed me looks to be 7 or 10 factors in the last two weeks; each of these factors represents an LL test (and its double-check) that won't need to be done. I'll be done with that the day after tomorrow. 
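The break-even logic described above can be sketched roughly as follows. This is an illustration only, not George's actual calculation: it assumes the well-known ~1/b heuristic for the chance of a factor at bit level b, takes TF cost as proportional to the number of candidates 2kp+1 below 2^b, and takes LL cost as ~p² log p. The cost constants are made up, so the absolute depths it returns won't match the mersenne.org table; only the trend (deeper TF for larger exponents, each level doubling in cost) is meaningful.

```python
import math

def tf_cost(p, b):
    # TF from scratch to 2^b: roughly 2^b / (2p) candidate factors 2kp+1,
    # constant work per candidate (sieving constants omitted)
    return 2.0 ** b / (2.0 * p)

def ll_cost(p):
    # one LL test: p iterations of an FFT of length O(p), ~ p^2 * log p
    return p * p * math.log(p)

def optimal_tf_depth(p, tests_saved=2.0):
    """Deepen TF while the expected LL work saved (a factor at level b+1 has
    probability ~1/(b+1) and saves an LL test plus its double-check) exceeds
    the TF work spent going one bit deeper."""
    b = 40
    while (1.0 / (b + 1)) * tests_saved * ll_cost(p) > tf_cost(p, b + 1):
        b += 1
    return b
```

Since TF cost per level scales like 2^b/p while LL cost scales like p² log p, the break-even depth rises slowly with the exponent, which is why the tabulated depths step up across exponent ranges.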