Sketch2aia (Mobile User Interface Sketches)

Introduced by Baulé et al. in "Automatic code generation from sketches of mobile applications in end-user development using Deep Learning".

A dataset of 374 photos of hand-drawn sketches of App Inventor apps, collected to develop the Sketch2aia model, which automatically generates App Inventor wireframes from such sketches.

Data format

Training: 237 images in JPG (.jpg) format at 720×1280 pixels, each accompanied by a JSON (.json) file with manual bounding box annotations for 10 classes of UI elements (Screen, Label, Button, Switch, Slider, TextBox, CheckBox, ListPicker, Image and Map), used to train the Sketch2aia model.

Validation: 42 images in JPG (.jpg) format at 720×1280 pixels, each accompanied by a JSON (.json) file with manual bounding box annotations for the same 10 classes of UI elements, used to test the Sketch2aia model.

Additional Images: 95 images in JPG (.jpg) format at 720×1280 pixels. Some images are accompanied by a JSON (.json) file with manual bounding box annotations for the same 10 classes of UI elements, while others have not yet been labeled. This portion of the dataset was collected during user evaluation of the Sketch2aia model and has not been directly used to train or test the object detection model.
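Each JPG image is accompanied by a JSON annotation file; the page does not document the exact annotation schema or file-naming convention. The following Python sketch illustrates one plausible way to pair an image with its annotations, assuming the JSON file shares the image's filename and uses "objects", "class" and "bbox" fields (these names are assumptions, not part of the dataset description).

```python
import json
from pathlib import Path

from PIL import Image

# The 10 UI element classes listed in the dataset description.
UI_CLASSES = {
    "Screen", "Label", "Button", "Switch", "Slider",
    "TextBox", "CheckBox", "ListPicker", "Image", "Map",
}

def load_example(image_path: Path):
    """Load a 720x1280 sketch photo and its bounding box annotations, if present."""
    image = Image.open(image_path)                      # JPG sketch photo
    annotation_path = image_path.with_suffix(".json")   # assumed: same name, .json extension

    boxes = []
    if annotation_path.exists():                        # "Additional Images" may be unlabeled
        with open(annotation_path) as f:
            annotation = json.load(f)
        for obj in annotation.get("objects", []):       # assumed field name
            assert obj["class"] in UI_CLASSES           # assumed field name
            boxes.append((obj["class"], obj["bbox"]))   # assumed: label + box coordinates
    return image, boxes

# Hypothetical usage:
# image, boxes = load_example(Path("training/sketch_001.jpg"))
```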

License


  • Unknown
