Implementing Interpolation of Normals, Textures, and Colors

//Screen space rasterization
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos)
{
    // TODO: From your HW3, get the triangle rasterization code.
    // TODO: Inside your rasterization loop:
    //    * v[i].w() is the vertex view space depth value z.
    //    * Z is interpolated view space depth for the current pixel
    //    * zp is depth between zNear and zFar, used for z-buffer

    auto v = t.toVector4();
    // get bounding box of current triangle
    std::vector<float> vec_x{ v[0].x(), v[1].x(), v[2].x() };
    std::vector<float> vec_y{ v[0].y(), v[1].y(), v[2].y() };
    std::sort(vec_x.begin(), vec_x.end());
    std::sort(vec_y.begin(), vec_y.end());

    int min_x = std::floor(vec_x[0]), max_x = std::ceil(vec_x[2]);
    int min_y = std::floor(vec_y[0]), max_y = std::ceil(vec_y[2]);

    for (int x = min_x; x <= max_x; x++) {
        for (int y = min_y; y <= max_y; y++) {
            // sample at the pixel center
            if (insideTriangle(x + 0.5f, y + 0.5f, t.v)) {
                auto [alpha, beta, gamma] = computeBarycentric2D(x + 0.5f, y + 0.5f, t.v);
                // perspective-correct view space depth of the current pixel
                float Z = 1.0f / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
                float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                zp *= Z;

                // TODO: Interpolate the attributes:
                auto interpolated_color = alpha * t.color[0] + beta * t.color[1] + gamma * t.color[2];
                auto interpolated_normal = alpha * t.normal[0] + beta * t.normal[1] + gamma * t.normal[2];
                auto interpolated_texcoords = alpha * t.tex_coords[0] + beta * t.tex_coords[1] + gamma * t.tex_coords[2];
                auto interpolated_shadingcoords = alpha * view_pos[0] + beta * view_pos[1] + gamma * view_pos[2];

                // depth test: a smaller zp means closer to the camera
                if (zp < depth_buf[get_index(x, y)]) {
                    fragment_shader_payload payload(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
                    payload.view_pos = interpolated_shadingcoords;
                    auto pixel_color = fragment_shader(payload);
                    Vector2i point(x, y);
                    set_pixel(point, pixel_color);
                    depth_buf[get_index(x, y)] = zp;
                }
            }
        }
    }
}

First, implement in rasterize_triangle a rasterization algorithm like the one from Assignment 2. One stretch of code deserves special attention; the rest is essentially the same as Assignment 2: build the bounding box, then run the depth test and update depth_buf and the pixel color.

Perspective-Correct Interpolation

Pay attention to these lines of code:

// alpha, beta, gamma are the barycentric coefficients computed after perspective projection
float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
zp *= Z;

First, we must be clear about what w and z are here:

v[0].w(), v[1].w(), v[2].w() are meant to hold the view space depth values of the triangle's vertices, but due to how the assignment framework is set up, toVector4() sets them all to 1:

std::array<Vector4f, 3> Triangle::toVector4() const
{
    std::array<Vector4f, 3> res;
    std::transform(std::begin(v), std::end(v), res.begin(), [](auto& vec) { return Vector4f(vec.x(), vec.y(), vec.z(), 1.f); });
    return res;
}

v[0].z(), v[1].z(), v[2].z() have already been through the perspective projection and the homogeneous divide, so they are not the true depth values that should be interpolated.

To run the depth test here, we first need the depth before perspective projection. Interpolating with barycentric coordinates gives (symbols with a ' are post-projection values; symbols without are pre-projection values):

$$Z = \alpha z_0 + \beta z_1 + \gamma z_2$$

In fact, with a short derivation, we can also compute this pre-projection barycentric interpolation using only the post-projection coefficients $\alpha', \beta', \gamma'$.

The conclusion first:

$$Z = \frac{1}{\alpha'/z_0 + \beta'/z_1 + \gamma'/z_2}$$

Derivation: first, we know that interpolation also holds in screen space, and that projection divides each coordinate by its depth (the projection matrix's constant factors cancel in the weights below), so for the x coordinate (y is analogous):

$$x' = \alpha' x_0' + \beta' x_1' + \gamma' x_2', \qquad x' = \frac{x}{Z}, \qquad x_i' = \frac{x_i}{z_i}$$

Rearranging it a bit:

$$x = Z\left(\frac{\alpha'}{z_0}x_0 + \frac{\beta'}{z_1}x_1 + \frac{\gamma'}{z_2}x_2\right)$$

Next, bring in the interpolation formula with the pre-projection coefficients:

$$x = \alpha x_0 + \beta x_1 + \gamma x_2$$

Comparing coefficients, we can obtain:

$$\alpha = \frac{\alpha' Z}{z_0}, \qquad \beta = \frac{\beta' Z}{z_1}, \qquad \gamma = \frac{\gamma' Z}{z_2}$$

It is easy to see that

$$\alpha + \beta + \gamma = 1$$

so we can continue to derive

$$Z\left(\frac{\alpha'}{z_0} + \frac{\beta'}{z_1} + \frac{\gamma'}{z_2}\right) = 1 \quad\Longrightarrow\quad Z = \frac{1}{\alpha'/z_0 + \beta'/z_1 + \gamma'/z_2}$$

With that, the derivation is complete, and we can see that the last rearranged formula is exactly this line of code in the framework:

float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());

So we have used the post-projection coefficients to recover the pre-projection depth Z.
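As a quick sanity check (my own example, not from the assignment): take two vertices at view space depths $z_0 = 1$ and $z_1 = 3$, and look at the screen-space midpoint between them, i.e. $\alpha' = \beta' = 0.5$, $\gamma' = 0$. Naive linear interpolation would report a depth of 2, but the perspective-correct formula gives

$$Z = \frac{1}{0.5/1 + 0.5/3} = \frac{1}{2/3} = 1.5$$

which is closer to the camera, because under perspective the nearer half of the edge covers more of the screen.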

Interpolation Formula for Arbitrary Attributes

The interpolation formula, where $I$ is any target attribute inside the triangle:

$$I = \alpha I_0 + \beta I_1 + \gamma I_2$$

And since

$$\alpha = \frac{\alpha' Z}{z_0}, \qquad \beta = \frac{\beta' Z}{z_1}, \qquad \gamma = \frac{\gamma' Z}{z_2}$$

we therefore obtain

$$I = Z\left(\alpha'\frac{I_0}{z_0} + \beta'\frac{I_1}{z_1} + \gamma'\frac{I_2}{z_2}\right)$$

This also corresponds to these two lines of code, with the post-projection z as the attribute being interpolated:

float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
zp *= Z;

This yields the final, correct interpolated depth.
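The same formula applies to any vertex attribute. The framework code shown at the top interpolates color, normal, and texture coordinates with the screen-space alpha, beta, gamma directly; a perspective-correct version of, say, the color interpolation would look like the following sketch (my own illustration based on the derivation above, not the framework's code):

// Hypothetical perspective-correct attribute interpolation, reusing Z and
// the screen-space alpha/beta/gamma from the rasterization loop above.
// Each vertex attribute is weighted by 1/z_i, then rescaled by Z.
auto interpolated_color = Z * (alpha * t.color[0] / v[0].w()
                             + beta  * t.color[1] / v[1].w()
                             + gamma * t.color[2] / v[2].w());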

Implementing the Blinn-Phong Model to Compute the Fragment Color

Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    // position and intensity
    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = { 0, 0, 0 };
    // direction from the shading point to the eye
    auto view_vector = (eye_pos - point).normalized();
    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
        // components are. Then, accumulate that result on the *result_color* object.

        // direction from the shading point to the light
        Vector3f light_vector = (light.position - point).normalized();
        // half vector between the view and light directions
        Vector3f half_vector = (view_vector + light_vector).normalized();
        // squared distance to the light, for the inverse-square falloff
        float r_square = (light.position - point).squaredNorm();

        // diffuse
        auto diffuse = kd.cwiseProduct(light.intensity / r_square) * std::max(0.0f, normal.dot(light_vector));

        // specular
        auto specular = ks.cwiseProduct(light.intensity / r_square) * std::pow(std::max(0.0f, normal.dot(half_vector)), p);

        // ambient
        auto ambient = ka.cwiseProduct(amb_light_intensity);

        result_color += diffuse + specular + ambient;
    }

    return result_color * 255.f;
}

First, let's get to know a few of the Eigen APIs involved (a small snippet follows the list):

  • normalized() : returns the normalized vector, i.e. same direction but with length 1
  • squaredNorm() : returns the squared 2-norm of the vector, i.e. the sum of the squares of its elements
  • cwiseProduct() : returns the element-wise product of two vectors
  • dot() : computes the dot product of two vectors; the result is a scalar
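
A minimal standalone snippet to make these concrete (my own example; the values in the comments are worked out by hand):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Vector3f a(1, 2, 2), b(0, 3, 4);
    Eigen::Vector3f n = a.normalized();    // (1/3, 2/3, 2/3): same direction, length 1
    float sq = b.squaredNorm();            // 0*0 + 3*3 + 4*4 = 25
    Eigen::Vector3f c = a.cwiseProduct(b); // (1*0, 2*3, 2*4) = (0, 6, 8)
    float d = a.dot(b);                    // 1*0 + 2*3 + 2*4 = 14
    std::cout << n.transpose() << ", " << sq << ", " << c.transpose() << ", " << d << std::endl;
}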

First we compute the viewing direction view_vector. Note that it points from the surface point to the eye, hence eye_pos - point; and remember to normalize vectors like this that only represent a direction. Then, for each light, compute the light direction light_vector and the half vector half_vector, as well as the squared distance from the light source to the shading point, which controls the falloff of the light energy. Finally, plug them into the formulas for diffuse, specular, and ambient and sum the results.
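In formula form, what the loop accumulates for each light is the standard Blinn-Phong terms (here $I$ is the light intensity, $r^2$ the squared distance to the light, $\mathbf{n}$ the normal, $\mathbf{l}$ the light direction, $\mathbf{v}$ the view direction):

$$L = k_a I_a + k_d\,\frac{I}{r^2}\,\max(0,\ \mathbf{n}\cdot\mathbf{l}) + k_s\,\frac{I}{r^2}\,\max(0,\ \mathbf{n}\cdot\mathbf{h})^p, \qquad \mathbf{h} = \frac{\mathbf{v} + \mathbf{l}}{\lVert \mathbf{v} + \mathbf{l} \rVert}$$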

Implementing the Texture Shading Fragment Shader

According to the assignment handout, we only need to treat $K_d$ in the Blinn-Phong model as the color sampled from the texture.

Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f return_color = { 0, 0, 0 };
    if (payload.texture)
    {
        // TODO: Get the texture value at the texture coordinates of the current fragment
        // use bilinear interpolation
        return_color = payload.texture->getColorBilinear(payload.tex_coords.x(), payload.tex_coords.y());
    }
    Eigen::Vector3f texture_color;
    texture_color << return_color.x(), return_color.y(), return_color.z();

    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = texture_color / 255.f; // normalize the RGB color to [0, 1]
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = texture_color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = { 0, 0, 0 };
    auto view_vector = (eye_pos - point).normalized();
    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
        // components are. Then, accumulate that result on the *result_color* object.
        Vector3f light_vector = (light.position - point).normalized();
        // half vector
        Vector3f half_vector = (view_vector + light_vector).normalized();
        // squared distance to the light, for the inverse-square falloff
        float r_square = (light.position - point).squaredNorm();

        // diffuse
        auto diffuse = kd.cwiseProduct(light.intensity / r_square) * std::max(0.0f, normal.dot(light_vector));

        // specular
        auto specular = ks.cwiseProduct(light.intensity / r_square) * std::pow(std::max(0.0f, normal.dot(half_vector)), p);

        // ambient
        auto ambient = ka.cwiseProduct(amb_light_intensity);

        result_color += diffuse + specular + ambient;
    }

    return result_color * 255.f;
}

The implementation here is largely the same as the Blinn-Phong model, but there are still a few places where it is easy to stumble.

Modify the getColor function in Texture.hpp to clamp its coordinates to the image bounds:

Eigen::Vector3f getColor(float u, float v)
{
    auto u_img = u * width;
    auto v_img = (1 - v) * height;

    // clamp to the valid pixel range
    if (u_img < 0) u_img = 0;
    if (u_img >= width) u_img = width - 1;
    if (v_img < 0) v_img = 0;
    if (v_img >= height) v_img = height - 1;

    auto color = image_data.at<cv::Vec3b>(v_img, u_img);
    return Eigen::Vector3f(color[0], color[1], color[2]);
}

We must clamp u_img and v_img to a valid range here; once they go out of range, the color cannot be read correctly.
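The same clamping can be written a bit more compactly with std::clamp (an equivalent sketch assuming C++17 and <algorithm>, not the framework's original code):

// equivalent bounds clamping with std::clamp
u_img = std::clamp(u_img, 0.0f, width - 1.0f);
v_img = std::clamp(v_img, 0.0f, height - 1.0f);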

Implementing Bump Mapping

The underlying theory will be covered in a follow-up note; here we only discuss the implementation.

Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;

    // TODO: Implement bump mapping here
    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // Vector ln = (-dU, -dV, 1)
    // Normal n = normalize(TBN * ln)

    float x = normal.x(), y = normal.y(), z = normal.z();
    // tangent and bitangent spanning the surface together with the normal
    Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z));
    Vector3f b = normal.cross(t);
    Matrix3f TBN;
    TBN <<
        t.x(), b.x(), normal.x(),
        t.y(), b.y(), normal.y(),
        t.z(), b.z(), normal.z();

    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float w = payload.texture->width;
    float h = payload.texture->height;

    // finite differences of the height map (approximated by the norm of the texture color)
    float dU = kh * kn * (payload.texture->getColor(u + 1 / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1 / h).norm() - payload.texture->getColor(u, v).norm());
    // perturbed normal in tangent space, transformed back by TBN
    Vector3f ln(-dU, -dV, 1.0f);
    normal = (TBN * ln).normalized();

    Eigen::Vector3f result_color = { 0, 0, 0 };
    result_color = normal;

    return result_color * 255.f;
}

Essentially this transforms the normal from the normal map's tangent space back into view space; following the comments in the framework will get you most of the way there.
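Written out, the perturbed normal the code computes per fragment is (with $h(u,v)$ being the height value, which the code approximates by the norm of the texture color):

$$TBN = \begin{bmatrix} \mathbf{t} & \mathbf{b} & \mathbf{n} \end{bmatrix}, \qquad \mathbf{n}_{\text{new}} = \operatorname{normalize}\left(TBN \cdot (-dU,\ -dV,\ 1)^{T}\right)$$

where $dU = k_h k_n \big(h(u + 1/w,\ v) - h(u,\ v)\big)$ and $dV = k_h k_n \big(h(u,\ v + 1/h) - h(u,\ v)\big)$.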

Implementing Displacement Mapping

Displacement mapping differs from bump mapping in that it really moves the positions of the points being shaded, rather than only perturbing their normals.
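The only step that is new relative to bump mapping is that the shading point itself is pushed out along the original normal before lighting:

$$p' = p + k_n\,\mathbf{n}\,h(u,\ v)$$

which corresponds to the point += (kn * normal * payload.texture->getColor(u, v).norm()); line in the code below.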

Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;

    // TODO: Implement displacement mapping here
    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // Vector ln = (-dU, -dV, 1)
    // Position p = p + kn * n * h(u,v)
    // Normal n = normalize(TBN * ln)
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();

    Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z));
    Vector3f b = normal.cross(t);
    Matrix3f TBN;
    TBN <<
        t.x(), b.x(), normal.x(),
        t.y(), b.y(), normal.y(),
        t.z(), b.z(), normal.z();

    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float w = payload.texture->width;
    float h = payload.texture->height;

    float dU = kh * kn * (payload.texture->getColor(u + 1 / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1 / h).norm() - payload.texture->getColor(u, v).norm());
    Vector3f ln(-dU, -dV, 1.0f);
    // displace the shading point along the original normal by the height value
    point += (kn * normal * payload.texture->getColor(u, v).norm());
    normal = (TBN * ln).normalized();

    Eigen::Vector3f result_color = { 0, 0, 0 };

    Vector3f view_vector = (eye_pos - point).normalized();
    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
        // components are. Then, accumulate that result on the *result_color* object.

        Vector3f light_vector = (light.position - point).normalized();
        // half vector
        Vector3f half_vector = (view_vector + light_vector).normalized();
        // squared distance to the light, for the inverse-square falloff
        float r_square = (light.position - point).squaredNorm();

        // diffuse
        auto diffuse = kd.cwiseProduct(light.intensity / r_square) * std::max(0.0f, normal.dot(light_vector));

        // specular
        auto specular = ks.cwiseProduct(light.intensity / r_square) * std::pow(std::max(0.0f, normal.dot(half_vector)), p);

        // ambient
        auto ambient = ka.cwiseProduct(amb_light_intensity);

        result_color += diffuse + specular + ambient;
    }

    return result_color * 255.f;
}

Bilinear Interpolation

Eigen::Vector3f getColorBilinear(float u, float v)
{
    auto u_img = u * width;
    auto v_img = (1 - v) * height;

    // clamp a pixel coordinate to the valid range of the image
    auto rangeCheck = [width = this->width, height = this->height](float x, bool isU) {
        if (x < 0)
            return 0.0f;

        if (isU && x >= width)
            return width - 1.0f;

        if (!isU && x >= height)
            return height - 1.0f;

        return x;
    };

    float u_min = rangeCheck(std::floor(u_img), true);
    float u_max = rangeCheck(std::ceil(u_img), true);
    float v_min = rangeCheck(std::floor(v_img), false);
    float v_max = rangeCheck(std::ceil(v_img), false);

    // get color at u00, u01, u10, u11
    // cv's origin is the top-left corner, so u00 (the bottom-left sample) sits at (v_max, u_min)
    auto u00 = image_data.at<cv::Vec3b>(v_max, u_min);
    auto u01 = image_data.at<cv::Vec3b>(v_min, u_min);
    auto u10 = image_data.at<cv::Vec3b>(v_max, u_max);
    auto u11 = image_data.at<cv::Vec3b>(v_min, u_max);

    // guard against floor == ceil (an exact integer coordinate), which would divide by zero
    float s = (u_max > u_min) ? (u_img - u_min) / (u_max - u_min) : 0.0f; // range [0, 1]
    float t = (v_max > v_min) ? (v_img - v_min) / (v_max - v_min) : 0.0f;

    // lerp in float: raw cv::Vec3b (uchar) arithmetic would saturate on v1 - v0
    auto lerp = [](float x, cv::Vec3f v0, cv::Vec3f v1) {
        return v0 + x * (v1 - v0);
    };

    // interpolate along u first, then along v
    auto u0 = lerp(s, u00, u10);
    auto u1 = lerp(s, u01, u11);

    auto color = lerp(t, u1, u0);
    return Eigen::Vector3f(color[0], color[1], color[2]);
}

Similar to getColor, we need a bounds check here as well. There are a few small pitfalls:

  • Our rasterizer places its origin at the bottom-left corner, while cv places the origin at the top-left, so the coordinates of the u00 sample (the bottom-left one) are (u_min, v_max); the other three follow by analogy
  • image_data.at takes its arguments as (y, x)
  • Because cv's coordinate origin is at the top-left, the t my code computes is measured from the top, unlike the lecture slide where it is measured from the bottom; so the second linear interpolation should be lerp(t, u1, u0); rather than lerp(t, u0, u1);
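
For reference, the two-step lerp above expands to the usual bilinear blend; in the code's variables ($s$ along u, $t$ measured downward from the $v_{\min}$ row):

$$c = (1-t)\big[(1-s)\,c_{01} + s\,c_{11}\big] + t\big[(1-s)\,c_{00} + s\,c_{10}\big]$$

Plugging in $s = t = 0$ returns exactly $c_{01}$, the texel at $(u_{\min}, v_{\min})$, which is a quick way to convince yourself that the ordering of u00/u01/u10/u11 and the final lerp(t, u1, u0) are consistent.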