
Website Info Scraping

How to configure headless browser scraping for AI submission.

To use SuperDir's AI submission feature, you can configure a headless browser to scrape your website information. It crawls your website's title, description, keywords, product images, logo, and other details, and fills them into the submission form.

We use Browserless as the headless browser; it offers free monthly credits and also supports self-hosting.
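
For context, here is a minimal sketch of what a crawl through Browserless can look like, assuming the puppeteer-core client and a Browserless WebSocket endpoint. The file name and function below are illustrative, not SuperDir's internal code.

scrape-sketch.ts
import puppeteer from "puppeteer-core";

// Connects to a remote Browserless instance (no local Chrome needed)
// and pulls the basic metadata the submission form asks for.
export async function scrapeSiteInfo(url: string, wsEndpoint: string) {
  const browser = await puppeteer.connect({ browserWSEndpoint: wsEndpoint });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle2", timeout: 30_000 });

    const title = await page.title();
    // $eval rejects when the selector is missing, so fall back to null.
    const description = await page
      .$eval('meta[name="description"]', (el) => el.getAttribute("content"))
      .catch(() => null);
    const ogImage = await page
      .$eval('meta[property="og:image"]', (el) => el.getAttribute("content"))
      .catch(() => null);

    return { title, description, ogImage };
  } finally {
    await browser.close();
  }
}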

Set environment variables

To enable this feature, add the following environment variables to your .env file:

.env
BROWSER_LESS_TOKEN=
# only required if you are self-hosting Browserless; set its base URL
#BROWSER_LESS_BASE_URL=
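
For illustration, the two variables could be combined into the WebSocket endpoint the scraper connects to. The hosted endpoint URL below and the way the self-hosted base URL is handled are assumptions, not SuperDir's exact logic.

browserless-endpoint.ts
// For self-hosted deployments, BROWSER_LESS_BASE_URL is assumed to point at
// your own Browserless instance (a ws:// or wss:// base URL); otherwise fall
// back to the hosted Browserless endpoint (assumed default).
const base =
  process.env.BROWSER_LESS_BASE_URL ?? "wss://production-sfo.browserless.io";

export const browserWSEndpoint = `${base}?token=${process.env.BROWSER_LESS_TOKEN}`;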

This feature is optional, but we recommend enabling it. The headless browser is used in the following cases (sketched after this list):

  • When fetching the website information with the fetch API fails, SuperDir automatically falls back to the headless browser to crawl it.
  • When the website does not provide an open graph image, SuperDir automatically uses the headless browser to take a screenshot.
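
Putting the two cases together, the fallback flow might look roughly like the sketch below. fetchSiteInfo, takeScreenshot, and the file names are hypothetical helpers for illustration, reusing the scrapeSiteInfo sketch and endpoint from above.

submission-info.ts
import puppeteer from "puppeteer-core";
import { browserWSEndpoint } from "./browserless-endpoint";
import { scrapeSiteInfo } from "./scrape-sketch";

// Cheap first attempt: plain fetch plus naive regex parsing of the HTML head.
async function fetchSiteInfo(url: string) {
  const res = await fetch(url);
  if (!res.ok) return null;
  const html = await res.text();
  return {
    title: /<title[^>]*>([^<]*)<\/title>/i.exec(html)?.[1] ?? null,
    description:
      /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i.exec(html)?.[1] ?? null,
    ogImage:
      /<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']*)["']/i.exec(html)?.[1] ?? null,
  };
}

// Fallback screenshot when the site has no open graph image.
async function takeScreenshot(url: string): Promise<Uint8Array> {
  const browser = await puppeteer.connect({ browserWSEndpoint });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle2" });
    return await page.screenshot({ type: "png" });
  } finally {
    await browser.close();
  }
}

export async function collectSubmissionInfo(url: string) {
  // Case 1: if the plain fetch fails, fall back to the headless browser crawl.
  const info =
    (await fetchSiteInfo(url).catch(() => null)) ??
    (await scrapeSiteInfo(url, browserWSEndpoint));

  // Case 2: if there is no open graph image, take a screenshot instead.
  const image = info.ogImage ?? (await takeScreenshot(url));

  return { ...info, image };
}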
