Organizer: The Japanese Society for Artificial Intelligence
Dates: 2021/06/08 - 2021/06/11
The standard way to create text ads is to capture searched keywords and the information on their landing pages (LPs). However, this coupling of keywords and an LP increases the number of text ads per LP, which makes it impossible to create ad texts for all effective combinations of keywords and LPs given limited human resources. We propose a transformer-based ad text generation model that uses both keywords and LPs to reduce the cost and time of generating ad texts. We extract tags and texts from the LP's HTML, such as title, h1, and h2, fine-tune a pre-trained encoder-decoder model (initialized as BERT2BERT), and pass HTML tag embeddings, analogous to position embeddings, to the input layer. The experimental results demonstrate that our model generates ad texts whose quality is close to that of human-written ones in terms of fluency, attractiveness, and correctness.
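The tag-embedding idea can be sketched as follows. This is a minimal, hypothetical NumPy illustration (all names, sizes, and the tag inventory are assumptions, not from the paper): each input token carries an HTML-tag id alongside its token id, and the model input is the sum of token, position, and tag embeddings, mirroring how BERT adds position embeddings to token embeddings.

```python
import numpy as np

# Illustrative sizes only (assumptions, not the paper's configuration).
VOCAB, MAX_LEN, N_TAGS, DIM = 100, 16, 4, 8

rng = np.random.default_rng(0)
token_emb = rng.normal(size=(VOCAB, DIM))    # per-token embedding table
pos_emb = rng.normal(size=(MAX_LEN, DIM))    # per-position embedding table
tag_emb = rng.normal(size=(N_TAGS, DIM))     # per-HTML-tag embedding table
                                             # e.g. 0=title, 1=h1, 2=h2, 3=body

def embed(token_ids, tag_ids):
    """Sum token, position, and HTML-tag embeddings for one sequence."""
    positions = np.arange(len(token_ids))
    return token_emb[token_ids] + pos_emb[positions] + tag_emb[tag_ids]

# A 3-token sequence: one token from the title, one from h1, one from body.
token_ids = np.array([5, 17, 42])
tag_ids = np.array([0, 1, 3])
x = embed(token_ids, tag_ids)
print(x.shape)  # (3, 8)
```

In an actual fine-tuning setup the summed embeddings would feed the encoder in place of the usual token-plus-position input, so the tag table is learned jointly with the rest of the model.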