Rosetta@home BOINCstats Challenge - Rosetta@home Against Corona!

NorthAlabamaCharitable

Member
USA team member
This contest will be interesting, considering R@h is currently out of work (except Portable Devices) and struggling to meet the recent increased demand. Good luck everyone!
 

Nick Name

Administrator
USA team member
I sent a team PM Friday afternoon; in my haste I gave the wrong time in Eastern: it's 8am, not 4pm. Math was always my worst subject. At least I got the actual UTC time correct! :p Thanks doneske for letting me know.

Everyone will be in the same boat if we can't get work, but it does kind of put a damper on things.
 

doneske

Well-Known Member
USA team member
I keep seeing all these messages about contributors being out of work. I have never been below 1,000 WUs in progress; currently sitting with 1,418 in the queue. Am I the one that is sucking them all up? I'm sort of embarrassed. However, if I'm the only one with work, that sort of determines which team finishes first in the challenge ;)
 

DrBob

Administrator
USA team member
Late upload this morning but I'm on the boards now. :)
All machines still have work here.

CRUNCH ON! :USA:
 

doneske

Well-Known Member
USA team member
We opened at #6 this morning and now at #4....

One of my machines is starting to run dry but the others probably have a good six hours of work before they start draining down. Maybe there will be more work by then.
 

Nick Name

Administrator
USA team member
doneske said:
I keep seeing all these messages about contributors being out of work. I have never been below 1,000 WUs in progress; currently sitting with 1,418 in the queue. Am I the one that is sucking them all up? I'm sort of embarrassed. However, if I'm the only one with work, that sort of determines which team finishes first in the challenge ;)
Let's hope! :USA: My guess is that the Unraid and Rackspace teams are the ones with the big servers, either the companies themselves or customers or both. Rackspace only shows 22 members, but they've popped into the top 10 out of nowhere.

From what I've read, only Android work is currently available. That explains the "No tasks sent" message I've been seeing in the logs, instead of the normal "no tasks available" message. I set up a backup project, ODLK, since I had never crunched it and the tasks are short, but it's out of work now too. :ROFLMAO:
 

doneske

Well-Known Member
USA team member
I have set up WCG as a backup project and limited it to SCC, which runs in under 2 hours per unit. If Rosetta work comes back, the SCC queue will drain down quickly.
 

doneske

Well-Known Member
USA team member
Now moved up to #3 with the last update... Could it be the other teams' machines are running dry?
 

Nick Name

Administrator
USA team member
doneske said:
We opened at #6 this morning and now at #4....

One of my machines is starting to run dry but the others probably have a good six hours of work before they start draining down. Maybe there will be more work by then.
I'm mildly troubled by reports of long-running tasks, which are now preferred by the project, giving extremely low credit. I've seen reports of 6-8 points for an 8-hour task. I've got a handful of such tasks myself. :unsure: They'll need to get that fixed if they really want folks to run them 16+ hours. Supposedly they're looking into it.

*Apparently, they've reverted the default run time BACK to 8 hours, which is much more reasonable for most people. That's about the most risk I'm willing to assume.

I can't find the post now, but Brian Coventry, one of the main researchers issuing COVID-19 work, said creating new work is time- and labor-intensive. I think they're also somewhat hamstrung by local travel restrictions etc. in Seattle, so new work may take a while.
 

Nick Name

Administrator
USA team member
doneske said:
Now moved up to #3 with the last update... Could it be the other teams' machines are running dry?
I had a few tasks in the queue, so I shut off network access when I got home last night. If I'd been thinking, I could have done that yesterday. :cry: Anyway, that helped a little. I think I'll set SCC as a backup as well.
 

Nick Name

Administrator
USA team member
Earlier tonight I noticed my Linux box was getting work but my Windoze systems weren't, so I spent a few hours trying to get Linux running on #3. Long story short, it's back to Windoze. It wasn't getting any Rosetta work anyway, and still isn't, so I don't think the downtime really cost me.
 

doneske

Well-Known Member
USA team member
One of my machines is totally WCG now. Two others are about 50/50 Rosetta/WCG. I have about 580 jobs left in total but have been getting a reasonable number of resends that keep the machines busy on Rosetta. Unless something happens, they will probably be dry of Rosetta by morning.
 

BeauZaux

Active Member
USA team member
Newbie question. Running WCG on some boxes while waiting for Rosetta tasks. I read somewhere to set the resource share (project weight) for WCG to zero to stop it from hogging resources. My question is, are there any settings that let a prioritized project force another project to suspend when tasks become available for the prioritized project? I'm not positive, but a few weeks ago it looked like WCG did that on its own as I was selecting projects to run.
 

NorthAlabamaCharitable

Member
USA team member
We've been getting a few Rosetta tasks here and there, but it's still sporadic. Right now at 57 in progress (we can support up to 800 tasks currently).
 

doneske

Well-Known Member
USA team member
BeauZaux said:
Newbie question. Running WCG on some boxes while waiting for Rosetta tasks. I read somewhere to set the resource share (project weight) for WCG to zero to stop it from hogging resources. My question is, are there any settings that let a prioritized project force another project to suspend when tasks become available for the prioritized project? I'm not positive, but a few weeks ago it looked like WCG did that on its own as I was selecting projects to run.
I know that setting the resource share to 0 tells BOINC to get tasks from WCG only when it can't get tasks from any other project. What I have noticed is that I only get enough WCG work to keep the threads active, with no queue, regardless of the Attach Every # of Days setting.

I don't think one project preempts another when new work arrives. What I have done is select only SCC at WCG, because those units run in less than 2 hours. That way, if Rosetta dumps in a lot of work, the machine will be free of WCG work within 2 hours. If you select ARP or FAH, which run in 5 or more hours, you will most likely have to wait for those to finish before Rosetta work starts. It's probably a good thing that projects don't get suspended, as that could just create a bunch of resends at the other project.

Once the work is downloaded, the client manages it based on deadlines, so if you downloaded a lot of FAH from WCG, those tasks would most likely preempt Rosetta, since they have a 1-day deadline. That's my best guess...
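Since the question keeps coming up, here's a rough toy model of the two behaviors described above: zero-share backup fetching and deadline-driven preemption. This is NOT the real BOINC client (its scheduler is far more involved), and all the names in it (Project, Task, pick_fetch_project, pick_next_task) are invented for illustration.

```python
# Toy sketch only -- not actual BOINC code. Models two documented behaviors:
#   1. A project with resource share 0 is asked for work only when no
#      normal-share project can supply any.
#   2. Once tasks are local, the client favors whatever is closest to its
#      report deadline, regardless of which project it came from.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    resource_share: int   # 0 = backup project
    has_work: bool        # can the server supply tasks right now?

@dataclass
class Task:
    project: str
    deadline_hours: float  # hours until the report deadline

def pick_fetch_project(projects):
    """Fetch from a normal-share project if any has work;
    fall back to a share-0 backup only when all others are dry."""
    primaries = [p for p in projects if p.resource_share > 0 and p.has_work]
    if primaries:
        return primaries[0]
    backups = [p for p in projects if p.resource_share == 0 and p.has_work]
    return backups[0] if backups else None

def pick_next_task(queue):
    """Earliest deadline first: a tight-deadline task runs ahead of a
    looser one, whichever project issued it."""
    return min(queue, key=lambda t: t.deadline_hours) if queue else None

# Rosetta is dry, so the share-0 backup (WCG) gets the fetch request:
projects = [Project("Rosetta@home", 100, has_work=False),
            Project("WCG", 0, has_work=True)]
print(pick_fetch_project(projects).name)  # WCG

# A 1-day-deadline WCG task runs ahead of a 3-day-deadline Rosetta task:
queue = [Task("Rosetta@home", deadline_hours=72),
         Task("WCG", deadline_hours=24)]
print(pick_next_task(queue).project)  # WCG
```

Which is exactly why short SCC units make a good backup: they never sit in a queue ahead of returning Rosetta work for long.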
 

doneske

Well-Known Member
USA team member
Just downloaded 1000 WUs from Ralph@Home, which is testing the new Rosetta 4.14 program. New work doesn't preempt WCG; as the SCC work ends, the Ralph work starts.
 