Good thoughts in 2023


This is a collection of good thoughts accumulated in 2023.

Good thoughts

Age learning

  1. Machine learning is based on prior human experience.
  2. A few types of data alone are not enough for a human-like artificial intelligence.
  3. Learn from all materials that can be sensed (videos, language, light, waves, etc.).

Outdated

  1. Introduce data augmentation in rotjigsaw

M-CNN

Similar to Gaussian noise and random resized crop

  1. Literature review: MAE and others; check whether anyone has already done this
  2. The data augmentation design can reference MAE
  3. Slice and merge schemes (see the sketch after this list)
  4. Is recovering to a larger feature map workable?
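To make the slice-and-merge idea concrete, here is a minimal sketch, assuming PyTorch; the function name, patch size, keep ratio, and the square re-packing are all illustrative choices, not a settled design.

```python
import torch

def slice_select_sew(img: torch.Tensor, patch: int = 32, keep: float = 0.75) -> torch.Tensor:
    """Hypothetical slice-and-merge augmentation: cut the image into a grid
    of patches, keep a random subset, and sew the kept patches back into a
    smaller square image (MAE-style random selection, but re-packed)."""
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    # unfold into non-overlapping patches: (gh * gw, c, patch, patch)
    patches = img.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(gh * gw, c, patch, patch)
    # randomly select a subset, rounded down to the largest square grid
    side = int((gh * gw * keep) ** 0.5)
    idx = torch.randperm(gh * gw)[: side * side]
    kept = patches[idx]
    # sew rows of patches back together into (c, side * patch, side * patch)
    rows = [torch.cat(list(kept[r * side:(r + 1) * side]), dim=2) for r in range(side)]
    return torch.cat(rows, dim=1)
```

With patch=32 and keep=0.75 on a 224×224 input, this sews 36 of the 49 patches into a 192×192 image.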

Pipeline:

  1. Slice, select, sew, resize, encode, decode; similarity loss between the original and the output
  2. Slice, select, sew, encode, decode, resize; similarity loss between the original and the output (both orderings are sketched below)
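A minimal sketch of both orderings, assuming PyTorch and reusing the slice_select_sew function above; the encoder and decoder are placeholder modules, and cosine similarity is only one guess at the similarity loss between the original and the output.

```python
import torch.nn.functional as F

def pipeline_loss(img, encoder, decoder, resize_first: bool, out_size: int = 224):
    """The two orderings above; only the position of the resize differs."""
    x = slice_select_sew(img)                        # slice, select, sew
    if resize_first:                                 # ordering 1: resize before encoding
        x = F.interpolate(x[None], size=out_size, mode="bilinear")[0]
    out = decoder(encoder(x[None]))                  # encode, decode
    if not resize_first:                             # ordering 2: resize after decoding
        out = F.interpolate(out, size=out_size, mode="bilinear")
    target = F.interpolate(img[None], size=out_size, mode="bilinear")
    # similarity loss between the original and the reconstruction
    return 1 - F.cosine_similarity(out.flatten(1), target.flatten(1)).mean()
```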

Needs a considerable number of experiments, which we can’t afford now

  1. Layer sleep: “Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing” (see the freezing sketch after this list)
  2. Channel sleep
  3. Kernel sleep: “Rethinking 1×1 Convolutions: Can we train CNNs with Frozen Random Filters?”
  4. Instance segmentation on COCO
  5. Design a new decoder for MixupMask (a question: what kinds of decoders can be transferred?)
  6. Stepwise weight freezing in transfer learning (it is difficult to determine how many layers need to be frozen; most of the datasets I used fall in the “small”-scale region)
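A minimal sketch of what layer sleep and kernel sleep could look like in PyTorch; the helper names and the epoch schedule are assumptions, and the task-correlated scheduling itself is the contribution of the cited papers.

```python
import torch.nn as nn

def freeze_first_k_blocks(model: nn.Module, k: int) -> None:
    """Layer sleep: freeze the first k child blocks of a backbone and keep
    the rest trainable; choosing k is exactly the difficulty in item 6."""
    for i, block in enumerate(model.children()):
        for p in block.parameters():
            p.requires_grad = i >= k

def kernel_sleep(conv: nn.Conv2d, frozen_idx) -> None:
    """Kernel sleep: zero the gradient of selected output kernels so they
    stay at their random initialization, in the spirit of the frozen-filter
    paper; channel sleep would mask a different axis of the same tensor."""
    def hook(grad):
        grad = grad.clone()
        grad[frozen_idx] = 0
        return grad
    conv.weight.register_hook(hook)

# e.g. put more layers to sleep as training stabilises:
# for epoch in range(num_epochs):
#     freeze_first_k_blocks(backbone, k=schedule[epoch])
```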

Finished

  1. Ran BYOL, SwAV, Barlow Twins, and cocor experiments for FullRot; submitted the paper to PR
  2. U-Net for MixupMask
  3. Direct segmentation pretext: fully unsupervised segmentation of crop regions with random ratio and angle
  4. Tried setting the loss monitor to accumulate different losses by scale; not working. Maybe an adaptive loss; check whether multi-loss works (one candidate is sketched after this list)
  5. The model does not know which rotated image is the background
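For item 4, one concrete form the adaptive loss could take is uncertainty weighting (Kendall et al., CVPR 2018), where each per-scale loss gets a learned log-variance weight instead of a fixed accumulation weight; this is a sketch of that idea, not the experiment that was actually run.

```python
import torch
import torch.nn as nn

class AdaptiveMultiLoss(nn.Module):
    """Weight each per-scale loss by a learned homoscedastic uncertainty
    (Kendall et al., 2018) instead of plain accumulation by scale."""
    def __init__(self, n_losses: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_losses))

    def forward(self, losses):
        # total = sum_i exp(-s_i) * L_i + s_i  (s_i = learned log-variance)
        return sum(torch.exp(-s) * l + s for s, l in zip(self.log_vars, losses))
```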

Reference

“What is the right way to gradually unfreeze layers in neural network while learning with tensorflow?”