Forum topic #15973: Userscript what to bear in mind?/Bitcoin

kittey (/users/320377), 2019-03-05 09:29:

> Bonsai7 said:
> Since I have no intention of harming the service, I would like to know if it's OK to use my script to download a good amount of pictures from here?

If you don't want to harm the service, use the API (/wiki_pages/help:api), as ehh mentioned. Generating full HTML pages puts a lot of extra strain on the servers and is completely unnecessary if you only want to retrieve machine-readable data. Don't parallelize, don't try to circumvent any limits you might encounter, and you should be fine.

Btw, the only way to increase the limits imposed on your account is by upgrading it via credit/debit card. If that's not an option for you, you'll have to make do with the basic member account limits (/wiki_pages/help:users).

ehh (/users/499533), 2019-03-04 21:05:

> Bonsai7 said:
> In summary, if a page has 10 pictures I make 21 requests per page (1 to get every href on the page, 10 to get every image address, and 10 to download the images).

You could reduce the number of requests you make by using the data-file-url attribute, e.g.:

    for i in soup.select("article[data-file-url]"):
        image = requests.get(i['data-file-url']).content
        ...

But it's probably simpler to just use the API (/wiki_pages/help:api).

BrokenEagle98 (/users/23799), 2019-03-04 20:59:

Bitcoin was talked about in topic #11170 (/forum_topics/11170). The indication is that it's not very likely. Also, as far as I know, Danbooru gets enough funds in upgrades every month to support existing operations.
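To make ehh's data-file-url suggestion concrete, here is a minimal sketch of that reduced-request approach. It is illustrative only: the search URL, tag, and User-Agent string are hypothetical placeholders, and the one-second pause simply follows kittey's advice to keep requests sequential.

    import time

    import requests
    from bs4 import BeautifulSoup

    HEADERS = {"user-agent": "my-downloader/0.1"}            # hypothetical UA; identify your script
    LISTING_URL = "https://danbooru.me/posts?tags=some_tag"  # hypothetical search page

    page = requests.get(LISTING_URL, headers=HEADERS)
    soup = BeautifulSoup(page.text, "html.parser")

    # Each <article> on the listing already carries the file's address in
    # data-file-url, so 10 pictures cost 11 requests (1 listing + 10 downloads)
    # instead of 21.
    for article in soup.select("article[data-file-url]"):
        file_url = article["data-file-url"]
        data = requests.get(file_url, headers=HEADERS).content
        with open(file_url.rsplit("/", 1)[-1], "wb") as f:
            f.write(data)
        time.sleep(1)  # stay sequential and unhurried, per kittey's advice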
Bonsai7 (/users/592686), 2019-03-04 20:34 (edited 2019-08-24):

Greetings,
I wrote a script for myself to download pictures from here in Python. It uses BeautifulSoup and urllib, and it roughly works like this:

    import urllib.request
    import requests
    from bs4 import BeautifulSoup

    page = requests.get(url, headers=header)
    soup = BeautifulSoup(page.text, "html.parser")
    images = []
    for link in soup.find_all("a", href=True):
        images.append(link.get("href"))

    # for every picture on the page:
    page2 = requests.get(post_url, headers=header)  # fetch the post page to find the image address
    urllib.request.urlretrieve(image_url, path)     # then download the image itself

In summary, if a page has 10 pictures I make 21 requests per page (1 to get every href on the page, 10 to get every image address, and 10 to download the images).

Since I have no intention of harming the service, I would like to know if it's OK to use my script to download a good amount of pictures from here?

I know that bandwidth costs money and I wouldn't mind donating once in a while, so I would like to know if Danbooru has a Bitcoin wallet address? I am aware that you can donate simply by upgrading your account, but debit/credit cards are out of the question.

BTW: I wouldn't mind sharing the source code here, but I doubt anyone would really be interested in it since there are already lots of solutions.

Even if it's pretty late: I did change the script to use the API, which makes it much easier anyway.
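Since the thread closes with the script being moved to the API, a minimal sketch of that route follows. It assumes the /posts.json endpoint and a file_url field on each returned post; treat the exact parameters as assumptions and check the API help page (/wiki_pages/help:api) before relying on them. The tag and limit are hypothetical.

    import time

    import requests

    API_URL = "https://danbooru.me/posts.json"  # documented at /wiki_pages/help:api
    PARAMS = {"tags": "some_tag", "limit": 10}  # hypothetical query

    # One JSON request replaces the HTML listing plus the 10 per-post page fetches.
    for post in requests.get(API_URL, params=PARAMS).json():
        file_url = post.get("file_url")
        if not file_url:  # some posts expose no file URL (e.g. restricted content)
            continue
        with open(file_url.rsplit("/", 1)[-1], "wb") as f:
            f.write(requests.get(file_url).content)
        time.sleep(1)  # keep requests sequential, no parallel downloads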