Primecoin GPU miner development thread

I’m the developer of the Primecoin GPU miner. I promised I’d open source it and here it is!

Source code repository: https://github.com/mtrlt/Reaper_prime

I’ll upload binaries when I get the chance.

So, any ideas on where to start to get the miner to run more stably on Radeon 79XX cards?

Congratulations! Good job on getting gpu miner development started for primecoin :slight_smile:

Congratulations on getting the algorithm onto the GPU.

While I’m not able to contribute anything useful here, I would like to know if you (mtrlt) will continue your good work on the GPU miner?

Can anyone provide some information on the stability and performance of beta 3? From everything I’ve read so far, it crashes every few seconds for most users and nobody has found a block with it yet. I’ve also read that only ATI GPUs are supported, since Nvidia hasn’t implemented OpenCL 1.2 in its drivers.

I haven’t tried it again since beta 2, but I couldn’t get it to work on Windows 7 64-bit. I have Ubuntu now and I’ll give it a shot as soon as I get new GPUs.
Subbed…

Thanks. It would be nice to have a description of the overall structure/architecture in the readme.

I get a warning when compiling. Can this be ignored?

[code]
CPUAlgos_hp7.cpp:640:84: warning: format specifies type 'unsigned int' but the argument
has type 'size_type' (aka 'unsigned long') [-Wformat]
    printf("MineProbablePrimeChain() : new sieve (%u/%u) ready in %uus\n",
        psieve.CandidateList.size(), nSieveSize, (unsigned int) (ticker()*1000 - nStart));
1 warning generated.
[/code]
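As an aside, this is just a printf format-specifier mismatch (a size_t being passed where %u expects unsigned int), so at worst the logged candidate count could print incorrectly. A minimal sketch of one way to silence it, paraphrased from the warning above rather than taken from the source tree:

[code]
// Illustrative fix for the -Wformat warning: make the argument types match the
// format string by casting the size_t value (alternatively, switch the
// specifier to %zu and drop the cast).
printf("MineProbablePrimeChain() : new sieve (%u/%u) ready in %uus\n",
       (unsigned int)psieve.CandidateList.size(), nSieveSize,
       (unsigned int)(ticker()*1000 - nStart));
[/code]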

Using OS X 10.8, OCL 1.1 & NVIDIA GeForce GT 650M 1024 MB

I am using this command to run reaper to show more errors:

CL_LOG_ERRORS=stdout ./reaper
Then I get the following crash:

[code]
Share thread started
GeneratePrimeTable() : setting nSievePercentage = 10, nSieveSize = 1000000
GeneratePrimeTable() : prime table [1, 1000000] generated with 78498 primes
Available CPU mining algorithms: hp7
Using default: hp7
Creating 4 CPU threads.
1...2...3...4...done
List of platforms:
0 Apple
Using platform number 0

Using device 0
OpenCL device 0...
Compiling kernel... this could take up to 2 minutes.
[CL_DEVICE_NOT_AVAILABLE] : OpenCL Error : Error: Build Program driver returned (10007)
Break on OpenCLErrorBreak to debug.
OpenCL Warning : clBuildProgram failed: could not build program for 0x1022600 (GeForce GT 650M) (err:-2)
Break on OpenCLWarningBreak to debug.
[CL_BUILD_ERROR] : OpenCL Build Error : Compiler build log:
Error getting function data from server

Break on OpenCLErrorBreak to debug.
Error getting function data from server
2013-09-18 14:18:15 Error: Error building OpenCL program[/code]
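The posts further down point at OpenCL version support as one culprit (the host code uses a 1.2-only call; see the patch below), though it isn’t clear from this log whether that is also behind this particular kernel build failure. Either way, a quick standalone check of what version each device actually reports can help. This is an illustrative sketch, not part of Reaper:

[code]
// Standalone sketch: list each OpenCL device and the version string it reports,
// so you can see whether the driver exposes OpenCL 1.1 or 1.2.
// Build (Linux):  g++ clversion.cpp -lOpenCL
// Build (OS X):   g++ clversion.cpp -framework OpenCL  (and include <OpenCL/opencl.h> instead)
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p)
    {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
        if (num_devices > 16) num_devices = 16;

        for (cl_uint d = 0; d < num_devices; ++d)
        {
            char name[256] = {0};
            char version[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, NULL);
            printf("platform %u, device %u: %s -- %s\n", p, d, name, version);
        }
    }
    return 0;
}
[/code]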

Why don’t you just make a Windows 7-ready .exe for those of us who don’t know how to compile?

It’s nice to get precompiled binaries, but learning how to do it yourself under Linux is good for you.
I did it as a noob, and I get a 63% speed boost over the same generic precompiled Windows build.

[code]
Reaper_prime/App.cpp:101:112: error: too many arguments to function
'json_t* blkmk_submit_jansson(blktemplate_t*, const unsigned char*, unsigned int, blknonce_t)'
    json_t* readyblock = blkmk_submit_jansson(tmpl, &w.data[0], w.dataid, NONCE, &w.auxdata[0], w.auxdata.size());
[/code]

Where do I get the correct version of libblkmaker?

https://dl.dropboxusercontent.com/u/55025350/bitcoin-libblkmaker.zip
or
http://cryptobro.com/bitcoin-libblkmaker.zip

Ah, and now I’ve noticed that OpenCL 1.2 is required. I should have read the thread.

I’ve read that there are some changes that fix that. See the following post from bitcointalk:

[quote=“arnuschky”]Here’s the patch to make it work with OpenCL 1.1 (and therefore Nvidia cards).

Replace function OpenCL::WriteBufferPattern in file AppOpenCL.cpp with the following code:

[code]
void OpenCL::WriteBufferPattern(uint device_num, string buffername, size_t data_length, void* pattern, size_t pattern_length)
{
    _clState& GPUstate = GPUstates[device_num];
    if (GPUstate.buffers[buffername] == NULL)
        cout << "Buffer " << buffername << " not found on GPU #" << device_num << endl;

#ifdef CL_VERSION_1_2
    cl_int status = clEnqueueFillBuffer(GPUstate.commandQueue, GPUstate.buffers[buffername], pattern, pattern_length, 0, data_length, 0, NULL, NULL);
#else
    uint8_t buffer[data_length];
    for (uint16_t i = 0; i < (data_length / pattern_length); i++)
        memcpy(&buffer[i*pattern_length], pattern, pattern_length);
    cl_int status = clEnqueueWriteBuffer(GPUstate.commandQueue, GPUstate.buffers[buffername], CL_TRUE, 0, data_length, buffer, 0, NULL, NULL);
#endif

    if (globalconfs.coin.config.GetValue<bool>("opencldebug"))
        cout << "Write buffer pattern " << buffername << ", " << pattern_length << " bytes. Status: " << status << endl;
}
[/code]

This runs for me, but I am getting

0 fermats/s, 0 gandalfs/s.
0 TOTAL

most likely because my card is too old and I had to set worksize 64 in primecoin.conf.

If you have a newer Nvidia card (with a “compute capability version” >= 2.0 according to CUDA - Wikipedia ), try to set worksize 512 and see what this gives you.[/quote]

^
This patch made the compilation work. My first attempt to run:

[code]
14756.4 fermats/s, 483.386 gandalfs/s. | Per hour: 87.5353k 2-ch  3.32526k 3-ch  89.8719 4-ch
14870.7 fermats/s, 485.929 gandalfs/s. | Per hour: 87.7471k 2-ch  3.26805k 3-ch  81.7012 4-ch
14989.1 fermats/s, 489.962 gandalfs/s. | Per hour: 88.7474k 2-ch  3.29526k 3-ch  149.785 4-ch
15086.7 fermats/s, 492.799 gandalfs/s. | Per hour: 88.5586k 2-ch  3.11096k 3-ch  138.265 4-ch
15163.5 fermats/s, 496.514 gandalfs/s. | Per hour: 88.6503k 2-ch  3.08126k 3-ch  128.386 4-ch
[/code]

/edit
How is it possible to fetch the number of primes per second?

My i5 PC gives 44 6-chains/h and 0.66 chains/day with 4 cores (hp11). That probably means around 4400 4-chains/h, assuming each extra link of chain length reduces the chain count by a factor of 10.
In the bitcointalk thread someone found blocks at a higher rate than the above calculation suggests. Then again, so few blocks found by the GPU miner have been reported that there is a lot of variance.
Well, I think we just don’t know yet how these numbers relate to actual XPM/day.
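To spell out the extrapolation above, here is a rough sketch of the arithmetic; the 10x drop per extra chain link is the poster’s assumption, not a measured figure:

[code]
// Back-of-the-envelope extrapolation: start from the measured 44 6-chains/h
// and assume each extra link in a chain cuts the rate by ~10x.
#include <cstdio>
#include <cmath>

int main()
{
    const double six_chains_per_hour = 44.0; // measured on the i5 with hp11
    const double drop_per_link = 10.0;       // assumed factor per chain-length step

    for (int length = 6; length >= 4; --length)
    {
        double per_hour = six_chains_per_hour * std::pow(drop_per_link, 6 - length);
        printf("estimated %d-chains/h: %.0f\n", length, per_hour);
    }
    return 0; // prints 44, 440, 4400
}
[/code]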

Is it possible to use this miner to pool mine? I downloaded a version, but I am getting error 52 when trying to connect, even though I am quite sure I am connecting correctly.

I think you can’t use it directly to pool mine; the client-server data protocol is different. Maybe someone will develop a proxy.

It seems Reaper reads data from the wallet, but doesn’t send anything back:

Run on testnet:

[code]
C:\Users\JE\Desktop\ReaperP\Reaper>cmd reaper
Microsoft Windows [Version 6.1.7601]
(c) Microsoft Corporation, 2009. All rights reserved.

C:\Users\JE\Desktop\ReaperP\Reaper>reaper
Reaper-V build 23.12.2012
I'm now mining primecoin!
32 share threads started.
List of platforms:
        0       AMD Accelerated Parallel Processing
Using platform number 0

Using devices 0, 1, 2
        0       Tahiti
Program built from saved binary.
LTC buffer size: 1357MB.
        1       Tahiti
Program built from saved binary.
LTC buffer size: 1357MB.
        2       AMD Athlon(tm) II X2 250 Processor
Program built from saved binary.
LTC buffer size: 1357MB.
Creating 3 GPU threads
1...2...3...done
Now target is 000000ad77490000000000000000000000000000000000000000000000000000
Now target is 0000000479490000000000000000000000000000000000000000000000000000
Now target is 000000537b490000000000000000000000000000000000000000000000000000
Now target is 0000005f78490000000000000000000000000000000000000000000000000000
[/code]

It looks like this project is dead.

Congratulations on the GPU miner my friend. Haven’t seen some in a while. :wink: