
#4
by ianncity - opened

Really good dataset, I will use it, but it seems like about half the code is just licensing talk. Not sure if there's a way you could sort this out, or maybe I could on my end before training? It seems compute-inefficient to train on billions of tokens of licensing talk.

Thanks for reaching out, let me clarify. The dataset reflects real repos, where license text is minimal: usually just 5-10 lines per file. Real codebases include these headers, and if you'd still rather strip them on your end, there's a short sketch at the end of this reply.

For context: you can explore random GitHub repositories to see this pattern; most files have minimal headers.

  • This dataset does not include LICENSE files at all
  • License headers are often less than 0.1% of total tokens in quality repos (a quick way to check this on your own files is sketched below)
  • Enterprise/production code includes concise license notices, not "billions of tokens"
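
To put a rough number on that 0.1% figure yourself, you can measure what share of a file's tokens a leading license header takes up. A minimal sketch, assuming Python files with '#' comment headers; the regex and keyword list are heuristics for illustration, not part of the dataset tooling:

import re

# Heuristic: a license header is a leading run of '#' comment lines
# that mentions "license" or "copyright". Tune the keywords as needed.
HEADER_RE = re.compile(r"\A(?:#[^\n]*\n)+")
KEYWORDS = ("license", "copyright")

def license_token_share(source: str) -> float:
    """Fraction of whitespace-split tokens taken up by a leading license header."""
    total = len(source.split())
    if total == 0:
        return 0.0
    match = HEADER_RE.match(source)
    if not match or not any(k in match.group(0).lower() for k in KEYWORDS):
        return 0.0
    return len(match.group(0).split()) / total

For a typical few-hundred-line source file with a 5-10 line header, this lands well under a few percent.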

License text in real repositories is minimal. For example, here is the top of a file from the HuggingFace Transformers repo:

# Copyright 2023 The HuggingFace Inc. team.
# Licensed under the Apache License, Version 2.0 (the "License");
# ...
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
import os
import re

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import direct_transformers_import
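
That said, if those few header lines still bother you, they're easy to drop during preprocessing. A minimal sketch along the same lines as above, assuming Python files with '#' comment headers; the helper name and the "content" column in the usage note are illustrative assumptions, not part of this dataset's schema:

import re

# Strip a leading '#' comment block only when it looks like a license
# notice, so ordinary module-level comments survive.
HEADER_RE = re.compile(r"\A(?:#[^\n]*\n)+")
KEYWORDS = ("license", "copyright")

def strip_license_header(source: str) -> str:
    """Drop a leading license/copyright comment block, keep everything else."""
    match = HEADER_RE.match(source)
    if match and any(k in match.group(0).lower() for k in KEYWORDS):
        return source[match.end():].lstrip("\n")
    return source

You could map this over the dataset before tokenizing, e.g. dataset.map(lambda ex: {"content": strip_license_header(ex["content"])}) with the datasets library, assuming the code lives in a "content" column.
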
nick007x changed discussion status to closed
