Please use this identifier to cite or link to this item:
https://hdl.handle.net/10216/132439
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.creator | João Silva Ferreira | |
dc.creator | André Restivo | |
dc.creator | Hugo Sereno Ferreira | |
dc.date.accessioned | 2022-09-08T21:53:03Z | |
dc.date.available | 2022-09-08T21:53:03Z | |
dc.date.issued | 2021 | |
dc.identifier.other | sigarra:444183 | |
dc.identifier.uri | https://hdl.handle.net/10216/132439 | |
dc.description.abstract | Designers often use physical hand-drawn mockups to convey their ideas to stakeholders. Unfortunately, these sketches do not depict the exact final look and feel of web pages, and communication errors often occur, resulting in prototypes that do not reflect the stakeholders' vision. Multiple suggestions exist to tackle this problem, mainly in the translation of visual mockups to prototypes. Some authors propose end-to-end solutions that directly generate the final code from a single (black-box) Deep Neural Network. Others propose the use of object detectors, providing more control over the acquired elements but missing out on the mockup's layout. Our approach provides a real-time solution that explores: (1) how to achieve a large variety of sketches that look indistinguishable from something a human would draw, (2) a pipeline that clearly separates the responsibilities of extracting and constructing the hierarchical structure of a web mockup, (3) a methodology to segment and extract containers from mockups, (4) the use of in-sketch annotations to provide more flexibility and control over the generated artifacts, and (5) an assessment of the synthetic dataset's impact on the ability to recognize diagrams actually drawn by humans. We start by presenting an algorithm capable of generating synthetic mockups. We trained our model (N=8400, Epochs=400) and subsequently fine-tuned it (N=74, Epochs=100) using real human-made diagrams. We achieved an mAP of 95.37%, with 90% of the tests taking less than 430 ms on modest commodity hardware (≈2.3 fps). We further provide an ablation study with well-known object detectors to evaluate the synthetic dataset in isolation, showing that the generator achieves an mAP score of 95%, ≈1.5× higher than training on hand-drawn mockups alone. | |
dc.language.iso | eng | |
dc.relation.ispartof | Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | |
dc.rights | openAccess | |
dc.title | Automatically Generating Websites from Hand-drawn Mockups | |
dc.type | Article in International Conference Proceedings | |
dc.contributor.uporto | Faculdade de Engenharia | |
dc.identifier.doi | 10.5220/0010193600480058 | |
dc.identifier.authenticus | P-00T-FAT | |
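The abstract above reports detection quality as mAP (mean average precision). As a minimal illustration of that metric, and not the authors' actual evaluation code, the sketch below computes average precision for one class by greedily matching predicted boxes to ground-truth boxes via intersection-over-union; the 0.5 IoU threshold is an assumed default, not taken from the paper.

```python
# Hedged sketch of single-class average precision (AP), the per-class
# quantity averaged into the mAP figure cited in the abstract.
# The 0.5 IoU matching threshold is an assumption for illustration.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, thresh=0.5):
    """preds: list of (confidence, box); gts: list of ground-truth boxes.
    Greedily matches predictions (highest confidence first) to unused
    ground truths, then integrates precision over recall steps."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched = set()
    tp = fp = 0
    recalls, precisions = [], []
    for _, box in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            o = iou(box, g)
            if o > best:
                best, best_i = o, i
        if best_i is not None and best >= thresh:
            matched.add(best_i)
            tp += 1
        else:
            fp += 1
        recalls.append(tp / len(gts))
        precisions.append(tp / (tp + fp))
    # Area under the stepwise precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

For example, a single perfectly localized detection yields an AP of 1.0, while adding one higher-confidence false positive halves it to 0.5; averaging such per-class APs over all UI-element classes gives the mAP the paper reports.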
Appears in Collections: | FEUP - Article in International Conference Proceedings |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
444183.pdf | | 1.83 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.